sentences: sequence
labels: sequence
[ "We take the first step towards multilingual style transfer by creating and releasing XFORMAL , a benchmark of multiple formal reformulations of informal text in Brazilian Portuguese, French, and Italian.", "Results on XFORMAL suggest that state-of-the-art style transfer approaches perform close to simple baselines, indicating that style transfer is even more challenging when moving multilingual.", "1 1 Introduction Style Transfer ( ST ) is the task of automatically transforming text in one style into another (for example, making an impolite request more polite).", "Most work in this growing field has focused primarily on style transfer within English, while covering different languages has received disproportional interest.", "Concretely, out of 35 ST papers we reviewed, all of them report results for ST within English text, while there is just a single work covering each of the following languages: Chinese, Russian, Latvian, Estonian, and French (Shang et al., 2019; Tikhonov et al., 2019; Korotkova et al., 2019; Niu et al., 2018).", "Notably, even though some efforts have been made towards multilingual ST , researchers are limited to providing system outputs as a means of evaluation, and progress is hampered by the scarcity of resources for most languages.", "At the same time, ST lies at the core of human communication: when humans produce language, they condition their choice of grammatical and lexical transformations to a target audience and a specific situation.", "Among the many possible stylistic variations, Heylighen et al. (1999) argue that a dimension similar to formality appears as the most important and universal feature distinguishing styles, registers or genres in different languages .", "Consider the informal excerpts and their formal reformulations in French ( FR ) and Brazilian Portuguese Work done as a Research Intern at Dataminr, Inc. 1 Code and data: https://github.com/Elbria/xformal-FoST BRAZILIAN-PORTUGUESE saiam disso, fora de vontade!!", "get out of it, willpower!!", "Abandonem essa situao, tenham fora de vontade.", "Abandon this situation, have willpower!", "FRENCH Il avait les yeux braqus ailleurs.", "He had his eyes fixed elsewhere.", "ITALIAN in bocca al lupo! good luck!", "Ti rivolgo un sincero augurio!", "I send you a sincere wish!", "Il ne prtait pas attention la situation.", "He was not paying attention to the situation.", "( BR-PT ) in Table 1. Both informal-formal pairs share the same content.", "However, the informal language conveys more information than is contained in the literal meaning of the words (Hovy, 1987).", "These examples relate to the notion of deep formality (Heylighen et al., 1999), where the ultimate goal is that of adding the context needed to disambiguate an expression.", "On the other hand, variations in formality might just reflect different situational and personal factors, as shown in the Italian ( IT ) example.", "This work takes the first step towards a more language-inclusive direction for the field of ST by building the first corpus of style transfer for non-English languages.", "In particular, we make the following contributions: 1. Building upon prior work on Formality Style Transfer ( F o ST ) (Rao and Tetreault, 2018), we contribute an evaluation dataset , XFORMAL that consists of multiple formal rewrites of informal sentences in three Romance languages: Brazilian Portuguese ( BR-PT ), French ( FR ), and Italian ( IT ); 2. 
Without assuming access to any gold-standard training data for the languages at hand, we benchmark a myriad of leading ST baselines through automatic and human evaluation methods.", "Our results show that FoST in non-English languages is particularly challenging, as complex neural models perform on par with a simple rule-based system consisting of handcrafted transformations.", "We make XFORMAL, our annotation protocols, and analysis code publicly available and hope that this study facilitates and encourages more research towards Multilingual ST.", "Controlling style aspects in generation tasks is studied in monolingual settings with an English-centric focus (intra-language) and in cross-lingual settings together with Machine Translation (MT) (inter-language).", "Our work rests in intra-language ST with a multilingual focus, in contrast to prior work.", "ST datasets that consist of parallel pairs in different styles include: GYAFC for formality (Rao and Tetreault, 2018), Yelp (Shen et al., 2017) and Amazon Product Reviews for sentiment (He and McAuley, 2016), political slant and gender controlled datasets (Prabhumoye et al., 2018), Expert Style Transfer (Cao et al., 2020), PASTEL for imitating personal style (Kang et al., 2019), SIMILE for simile generation (Chakrabarty et al., 2020), and others.", "Intra-language ST was first cast as a generation task by Xu et al. (2012) and is addressed through methods that use either parallel data or unpaired corpora of different styles.", "Parallel corpora designed for the task at hand are used to train traditional encoder-decoder architectures (Rao and Tetreault, 2018), learn mappings between latent representations of different styles (Shang et al., 2019), or fine-tune pre-trained models (Wang et al., 2019).", "Other approaches use parallel data from similar tasks to facilitate transfer in the target style via domain adaptation (Li et al., 2019), multi-task learning (Niu et al., 2018; Niu and Carpuat, 2020), and zero-shot transfer (Korotkova et al., 2019), or create pseudo-parallel data via data augmentation techniques (Zhang et al., 2020; Krishna et al., 2020).", "Approaches that rely on non-parallel data include disentanglement methods based on the idea of learning style-agnostic latent representations (e.g., Shen et al. (2017); Hu et al. (2017)).", "However, these have recently been criticized for resulting in poor content preservation (Xu et al., 2018; Jin et al., 2019; Luo et al., 2019; Subramanian et al., 2018), and, alternatively, translation-based models have been proposed that use reconstruction and back-translation losses (e.g., Logeswaran et al. (2018); Prabhumoye et al. (2018)).", "Another line of work focuses on manipulation methods that remove the style-specific attributes of text (e.g., Li et al. (2018); Xu et al. (2018)), while recent approaches use reinforcement learning (e.g., Wu et al. (2019); Gong et al. 
(2019)), probabilistic formulations (He et al., 2020), and masked language models (Malmi et al., 2020).", "Inter-language ST was introduced by Mirkin and Meunier (2015), who proposed personalized MT for EN-French and EN-German.", "Subsequent MT works control for politeness (Sennrich et al., 2016a), voice (Yamagishi et al., 2016), personality traits (Rabinovich et al., 2017), user-provided terminology (Hasler et al., 2018), gender (Vanmassenhove et al., 2018), formality (Niu et al., 2017; Feely et al., 2019), morphological variations (Moryossef et al., 2019), complexity (Agrawal and Carpuat, 2019), and reading level (Marchisio et al., 2019).", "We describe the process of collecting formal rewrites using data statements protocols (Bender and Friedman, 2018; Gebru et al., 2018).", "Curation rationale To collect XFORMAL, we first curate informal excerpts in multiple languages.", "To this end, we follow the procedures described in Rao and Tetreault (2018) (henceforth RT18), who create a corpus of informal-formal sentence pairs in English (EN) entitled Grammarly's Yahoo Answers Formality Corpus (GYAFC).", "Concretely, we use the L6 Yahoo! Answers corpus, which consists of questions and answers posted to the Yahoo! Answers platform (https://webscope.sandbox.yahoo.com/).", "The corpus contains a large amount of informal text and allows control for different languages and different domains.", "Similar to the collection of GYAFC, we extract all answers from the Family & Relationships (F&R) topic that correspond to the three languages of interest: Família e Relacionamentos (BR-PT), Relazioni e famiglia (IT), and Amour et relations (FR) (Step 1).", "We follow the same pre-processing steps as described in RT18 for consistency (Step 2).", "We filter out answers that:", "a) consist of questions;", "b) include URLs;", "c) have fewer than five or more than 25 tokens; or", "d) constitute duplicates.", "We automatically extract informal candidate sentences, as described in 5.3 (Step 3).", "Finally, we randomly sample 1,000 sentences from the pool of informal candidates for each language.", "Table 2 presents statistics of the curation steps.", "Procedures We use the Amazon Mechanical Turk (MTurk) platform to collect formal rewrites for our informal sentences.", "For each language, we split the annotation into 20 batches of 50 Human Intelligence Tasks (HITs).", "In each HIT, Turkers are given an informal excerpt and asked to generate its formal rewrite in the same language without changing its meaning.", "We collect 4 rewrites per excerpt and release detailed instructions under A.G. 
Annotation Workflow & Quality Control Our annotation protocol consists of multiple Quality Control (QC) steps to ensure the recruitment of high-quality annotators.", "As a first step, we use location restrictions (QC1) to limit the pool of workers to countries where native speakers are most likely to be found.", "Next, we run several small pilot studies (of 10 HITs) to recruit potential workers.", "To participate in the pilot study, Turkers have to pass a qualification test (QC2) consisting of multiple-choice questions that test workers' understanding of formality (see A.L).", "The pilot study results are reviewed by a native speaker (QC3) of each language to exclude workers who performed consistently poorly.", "We find that the two main reasons for poor quality are:", "a) rewrites of minimum-level edits, or", "b) rewrites that change the input's meaning.", "Table 3 presents the number of workers at each QC step.", "Only workers passing all quality control steps (last row of Table 3) contribute to the final task.", "Finally, we post-process the collected rewrites by", "a) removing instances consisting of normalization-based edits only and", "b) correcting minor spelling errors using an off-the-shelf tool (https://languagetool.org/).", "We tokenize with nltk: https://www.nltk.org/api/nltk.tokenize.html", "Table 3: Number of Turkers after each QC step. (QC1) Location restriction: BR-PT 151, FR 78, IT 59; (QC2) Qualification test: BR-PT 54, FR 40, IT 33; (QC3) Rewrites review: BR-PT 9, FR 16, IT 11.", "Turkers' demographics We recruit Turkers from Brazil, France/Canada, and Italy for BR-PT, FR, and IT, respectively.", "Beyond their country of residence, no further information is available.", "Compensation We compensate at a rate of $0.10 per HIT, with additional one-time bonuses that bump workers up to a target rate of over $10/hour.", "After this entire process, we have constructed a high-quality corpus of formality rewrites of 1,000 sentences for three languages.", "In the next section, we provide statistics and an analysis of XFORMAL.", "Types of formality edits Following Pavlick and Tetreault (2016), we analyze the most frequent edit operations Turkers perform when formalizing the informal sentences.", "We conduct both an automatic analysis (details in A.I) of the whole set of rewrites, and a human analysis (details in A.H) of a random sample of 200 rewrites per language (we recruited a native speaker for each language).", "Table 4 presents both analyses' results, where we also include the corresponding statistics for the English language (GYAFC).", "In general, we observe similar trends across languages: humans make edits covering both the \"noisy-text\" sense of formality (e.g., fixing punctuation, spelling errors, capitalization) and the more situational sense (paraphrase-based edits).", "Although cross-language trends are similar, we also observe differences: deleting fillers and word completion seem to be more prominent in the English rewrites than in other languages; normalizing abbreviations is a considerably frequent edit type for Brazilian Portuguese; paraphrasing is more frequent in the three non-English languages.", "Surface differences of informal-formal pairs We quantify surface-level differences between the informal sentences and formal rewrites by computing their character-level Levenshtein distance (Figure 1) and their pairwise Lexical Difference (LeD) based on the percentage of tokens that are not found in both sentences (Table 5).", "Both analyses show 
that Italian rewrites have the most edits compared to their corresponding informal sentences.", "French and Brazilian Portuguese follow, with English rewrites being closer to the informal inputs.", "Given that a large number of reformulations consist of paraphrase-based edits (more than 50%), we want to quantify the extent to which the formal rewrites of each sentence are diverse in terms of their lexical choices.", "To that end, we quantify diversity by measuring selfBLEU (Zhu et al., 2018): considering one set of formal sentences as the hypothesis set and the others as references, we compute BLEU for each formal set and define the average BLEU score as a measure of the dataset's diversity.", "Higher scores imply less diversity of the set.", "Results (last row of Table 4) show that XFORMAL consists of more diverse rewrites compared to GYAFC.", "Formality shift of rewrites We analyze the formality distribution of the original informal sentences and their formal rewrites in GYAFC and XFORMAL, as predicted by formality mBERT models (5.3).", "The distributions of formal rewrites are skewed towards positive values (Figure 2).", "We benchmark eight ST models on XFORMAL to serve as baseline scores for future research.", "We describe the models (5.1), the experimental setting (5.2), the human and automatic evaluation methods (5.3 and 5.4), and results (5.5).", "Simple baselines We define three baselines: 1. COPY Motivated by Pang and Gimpel (2019), who notice that untransferred sentences with no alterations have the highest BLEU score by a large margin for ST tasks, we use this simple baseline as a lower bound; 2. RULE-BASED Based on the quantitative analysis of 4 and similarly to RT18, we develop a rule-based approach that performs a set of predefined edit operations defined by handcrafted rules.", "Example transformations include fixing casing, removing repeated punctuation, and expanding contractions from a handcrafted list; a detailed description is found in A.C; 3. ROUND-TRIP MT Inspired by Zhang et al. (2020), who identify useful training pairs among the paraphrases generated by round-trip translations of millions of sentences, we devise a simpler baseline that starts from a text in language x, pivots to EN, and then backtranslates to x, using the AWS translation service.", "NMT-based models with synthetic parallel data We follow the TRANSLATE TRAIN (Conneau et al., 2018; Artetxe et al., 2020) approach to collect data in multilingual settings: we obtain pseudo-parallel corpora in each language via machine translating an EN resource of informal-formal pairs (5.2).", "Then, starting with TRANSLATE TRAIN, we benchmark the following NMT-based models: 1. TRANSLATE TRAIN TAG extends a leading EN FoST approach (Niu et al., 2018) and trains a unified model that handles either formality direction via attaching a source tag that denotes the desired target formality; 2. MULTI-TASK TAG-STYLE Niu et al. (2018) augments the previous approach with bilingual data that is automatically identified as formal (5.3).", "The models are then trained in a multi-task fashion; 3. 
BACKTRANSLATE augments the TRANSLATE TRAIN data with back-translated sentences of automatically detected informal text (Sennrich et al., 2016b), using model 1 as the base model.", "We exclude backtranslated pairs consisting of copies.", "The output of the RULE-BASED system is given as input to each model at inference time.", "For all three models, we run each system with 4 random seeds, and combine them in a linear ensemble for decoding.", "Unsupervised models 1. UNSUPERVISED NEURAL MACHINE TRANSLATION (UNMT) (Subramanian et al., 2018) defines a pseudo-supervised setting and combines denoising auto-encoding and back-translation losses; 2. DEEP LATENT SEQUENCE MODEL (DLSM) (He et al., 2020) defines a probabilistic generative story that treats two unpaired corpora of separate styles as a partially observed parallel corpus and learns a mapping between them, using variational inference.", "Training data For TRANSLATE TRAIN TAG we use GYAFC, a large set of 110K EN informal-formal parallel sentence pairs obtained through crowd-sourcing.", "Additionally, we augment the translated resource with OpenSubtitles (Lison and Tiedemann, 2016) bilingual data used for training MT models (available at http://opus.nlpl.eu/).", "Given that bilingual sentence pairs can be noisy, we perform a filtering step to remove noisy bitexts using the Bicleaner toolkit (Sánchez-Cartagena et al.): we use the publicly available pretrained Bicleaner models (https://github.com/bitextor/bicleaner) and discard sentences with a score lower than 0.5.", "Furthermore, we apply the same filtering steps as in 3 (Curation rationale).", "Finally, each of the remaining sentences is assigned a formality score (5.3), resulting in two pools of informal and formal text.", "Training instances are then randomly sampled from those pools: formal parallel pairs are used for MULTI-TASK TAG-STYLE; informal target-side sentences are backtranslated for BACKTRANSLATE; both informal and formal target-side texts are independently sampled from the two pools for training unsupervised models.", "Finally, for unsupervised FoST in FR, we additionally experiment with in-domain data from the L26 French Yahoo! Answers corpus (https://webscope.sandbox.yahoo.com/catalog.php?datatype=l&did=74), which consists of 1.7M FR questions, split into 6.2M/6.6M formal/informal sentences.", "Table 6 includes statistics on training sizes; bilingual data statistics are in A.K.", "Preprocessing We preprocess data consistently across languages using MOSES (Koehn et al., 2007).", "Our pipeline consists of three steps:", "a) normalization;", "b) tokenization;", "c) true-casing.", "For NMT-based approaches, we also learn joint source-target BPE with 32K operations (Sennrich et al., 2016b).", "Model Implementations For NMT-based and unsupervised models we use the open-sourced implementations of Niu et al. (2018) (https://github.com/xingniu/multitask-ft-fsmt) and He et al. (2020) (https://github.com/cindyxinyiwang/deep-latent-sequence-model), respectively.", "We include more details on model architectures in A.D. 
5.3 Automatic Evaluation Recent work on ST evaluation highlights the lack of standard evaluation practices (Yamshchikov et al., 2020; Pang, 2019; Pang and Gimpel, 2019; Mir et al., 2019).", "We follow the most frequent evaluation metrics used in EN tasks and measure the quality of the systems' outputs with respect to four dimensions, while we leave an extensive evaluation of automatic metrics for future work.", "Formality We average the style transfer scores of transferred sentences computed by a formality regression model.", "We fine-tune mBERT (Devlin et al., 2019) pre-trained language models on the machine-translated answers genre from Pavlick and Tetreault (2016), which consists of about 4K human-annotated sentences rated on a 7-point formality scale.", "To acquire an annotated corpus in the languages of interest, we follow the TRANSLATE TRAIN transfer approach: we propagate the original EN training data's human ratings to their corresponding translations, assuming that translation preserves formality (see A.A for discussion on this assumption).", "To evaluate the multilingual formality regression models' performance, we crowdsourced human judgments of 5 Turkers for 200 sentences per language.", "We report Spearman correlations of 64 (BR-PT), 70 (IT), 71 (FR), and 81 (EN).", "Fluency We compute the logarithm of each sentence's probability, computed by a 5-gram Kneser-Ney language model (Kneser and Ney, 1995), and normalize it by the sequence length.", "We train each [...]", "Overall We compute multiBLEU (Post, 2018) by comparing against the multiple formal rewrites in XFORMAL.", "Freitag et al. (2020) show that correlation with human judgments improves when considering multiple references for MT evaluation.", "Given that automatic evaluation of ST lacks standard evaluation practices, even in cases when EN is considered, we turn to human evaluation to reliably assess our baselines, following the protocols of RT18.", "We sample a subset of 100 sentences from XFORMAL per language, evaluate outputs of 5 systems, and collect 5 judgments per instance.", "We open the task to all workers passing QC2 in Table 3.", "We include inter-annotator agreement results in A.E.", "Formality We collect formality ratings for the original informal reference, the formal human rewrite, and the formal system outputs on a 7-point discrete scale from -3 to 3, following Lahiri (2015) (Very informal / Informal / Somewhat informal / Neutral / Somewhat formal / Formal / Very formal).", "Fluency We collect fluency ratings for the original informal reference, the formal human rewrite, and the formal system outputs on a discrete scale of 1 to 5, following Heilman et al. 
(2014) (Other / Incomprehensible / Somewhat comprehensible / Comprehensible / Perfect).", "Meaning Preservation We adopt the annotation scheme of Semantic Textual Similarity (Agirre et al., 2016): given the informal reference and the formal human rewrite or the formal system outputs, Turkers rate the two sentences' similarity on a 1 to 6 scale (Completely dissimilar / Not equivalent but on same topic / Not equivalent but share some details / Roughly equivalent / Mostly equivalent / Completely equivalent).", "Overall We collect overall judgments of the system outputs using relative ranking: given the informal reference and a formal human rewrite, workers are asked to rank system outputs in the order of their overall formality, taking into account both fluency and meaning preservation.", "An overall score is then computed for each model via averaging results across annotated instances.", "Table 7 shows automatic results for all models across the four dimensions as well as human ratings for selected top models.", "NMT-based model evaluation Concretely, the RULE-BASED baselines are significantly (p < 0.05) the best performing models in terms of meaning preservation across languages.", "This result is intuitive, as the RULE-BASED models act at the surface level and are unlikely to change the informal sentence's meaning.", "The BACKTRANSLATE ensemble systems are the second-best performing models in terms of meaning preservation, while the ROUND-TRIP MT outputs diverge semantically from the informal sentences the most.", "Those results are consistent across languages and human/automatic evaluations.", "On the other hand, when we compare systems in terms of their formality, we observe the opposite pattern: the RULE-BASED and BACKTRANSLATE outputs are the most informal compared to the other ensemble NMT-based approaches across languages.", "Interestingly, the ROUND-TRIP MT outputs exhibit the largest formality shift for BR-PT and FR as measured by human evaluation.", "The trade-off between meaning preservation and formality among models was also observed in EN (RT18).", "Moreover, when we move to fluency, we notice similar results across systems.", "Specifically, human evaluation assigns almost all models an average score of > 4, denoting that system outputs are comprehensible on average, with small differences between systems not being statistically significant.", "Notably, perplexity tells a different story: all system outputs are rated significantly better than the RULE-BASED systems across configurations and languages.", "This result suggests that perplexity might not be a reliable metric for measuring fluency in this setting, as noticed in Mir et al. (2019) and Krishna et al. 
(2020).", "When it comes to the overall ranking of systems, we observe that the NMT-based ensembles are better than the RULE-BASED baselines for BR-PT and FR, yet by a small margin, as denoted by both multiBLEU and human evaluation.", "However, the corresponding results for IT denote that there is no clear winner, and the NMT-based ensembles still fail to surpass the naive RULE-BASED models, yet by a small margin.", "Finally, all ensembles outperform the trivial COPY baseline.", "Table 8 presents examples of system outputs.", "As a side note, we followed the recommendation of Tikhonov and Yamshchikov (2018) to show the performance of individual runs of ST models and better visualize trade-offs between metrics.", "Unlike their work, which found that reruns of the same model showed wide performance discrepancies, we found that most of our NMT-based models did not vary in performance on XFORMAL.", "The results can be visualized in A.M.", "Unsupervised model evaluation We also benchmark the unsupervised models but focus solely on automatic metrics since they lag behind their supervised counterparts.", "(Table 8 example, BR-PT INFORMAL: n preciso pedir pois sei q ela vai vir atras!!)", "As shown in Table 7, when using out-of-domain data (e.g., OpenSubtitles) for training, the models perform worse than their NMT counterparts across all three languages.", "The difference is most stark when considering selfBLEU and multiBLEU scores.", "However, given access to large in-domain corpora (e.g., the L26 Yahoo! French Answers), the gap between the two model classes closes, with DLSM achieving a multiBLEU score of 42.1 compared to 48.3 for the best performing NMT model, BACKTRANSLATE.", "This shows the promise of unsupervised methods, assuming a large amount of in-domain data, on multilingual ST tasks.", "Lexical differences of system outputs Finally, in Figure 3 we analyze the diversity of outputs by leveraging LeD scores resulting from pairwise comparisons of different NMT systems.", "A larger LeD score denotes a larger difference between the lexical choices of the two systems under comparison.", "First, we observe that the ROUND-TRIP MT outputs have the smallest lexical overlap with the informal input sentences.", "However, when this observation is examined together with human evaluation results, we conclude that the large number of lexical edits happens at the cost of diverging semantically from the input sentences.", "Moreover, we observe that the average lexical differences within NMT-based systems are small.", "This indicates that different systems perform similar edit operations that do not deviate much from the input sentence in terms of their lexical choices.", "This is unfortunate given that multilingual FoST requires systems to perform more phrase-based operations, as shown in the analysis in 4.", "Evaluation Metric While evaluating evaluation metrics is not a goal of this work (though the data can be used for that purpose), we observe that the top models identified by the automatic metrics generally align with the top models identified by humans.", "While promising, further work is required to confirm whether the automatic measures really do correlate with human judgments.", "This work extends the task of formality style transfer to a multilingual setting.", "Specifically, we contribute XFORMAL, an evaluation testbed consisting of informal sentences and multiple formal rewrites spanning three languages: BR-PT, FR, and IT.", "As in Rao and Tetreault (2018), we find that Turkers can be effective in creating high-quality ST 
corpora.", "In contrast to the aforementioned EN corpus, we find that the rewrites in XFORMAL tend to be more diverse, making it a more challenging task.", "Additionally, inspired by work on cross-lingual transfer and EN FoST, we benchmark several methods and perform automatic and human evaluations on their outputs.", "We found that NMT-based ensembles are the best performing models for FR and BR-PT (a result consistent with EN); however, they perform comparably to a naive RULE-BASED baseline for IT.", "To further facilitate reproducibility of our evaluations and corpus creation processes, as well as drive future work, we will release our scripts, rule-based baselines, source data, and annotation templates, on top of the release of XFORMAL.", "Our results open several avenues for future work in terms of benchmarking and evaluating FoST in a more language-inclusive direction.", "Notably, current supervised and unsupervised approaches for EN FoST rely on parallel in-domain data (with the latter treating the parallel set as two unpaired corpora) that are not available in most languages.", "We suggest that benchmarking FoST models in multilingual settings will help understand their ability to generalize and lead to safer conclusions when comparing approaches.", "At the same time, multilingual FoST calls for more language-inclusive consideration of automatic evaluation metrics.", "Model-based approaches have recently been proposed for evaluating different aspects of ST.", "However, most of them rely heavily on English resources or pretrained models.", "How those methods can be extended to multilingual settings and how we evaluate their performance remain open questions.", "We thank Sudha Rao for providing references and materials of the GYAFC dataset, Chris Callison-Burch and Courtney Napoles for discussions on MTurk annotations, Svebor Karaman for helping with data collection, our colleagues at Dataminr, and the NAACL reviewers for their helpful and constructive comments.", "Finally, we address ethical considerations for dataset papers, given that our work proposes a new corpus, XFORMAL.", "We reply to the relevant questions posed in the NAACL 2021 Ethics FAQ.", "7.1 Dataset Rights The underlying data for our dataset, as well as for training our FoST models and formality classifiers, are from the Yahoo! Answers L6 dataset.", "We were granted written permission by Yahoo (now Verizon) to make the resulting dataset public for academic use.", "Turkers are paid over 10 USD an hour.", "We targeted a rate higher than the US national minimum wage of 7.50 USD given discussions with other researchers who use crowdsourcing.", "We include more information on collection procedures in 3.", "This question is not applicable for our work.", "We follow Bender and Friedman (2018) and Gebru et al. (2018) and report characteristics in 3 and 4." ]
[ "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "result", "method", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "other", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "result", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method" ]
[ "In this paper, we study the problem of parsing structured knowledge graphs from textual descriptions.", "In particular, we consider the scene graph representation (Johnson et al., 2015) that considers objects together with their attributes and relations: this representation has been proved useful across a variety of vision and language applications.", "We begin by introducing an alternative but equivalent edge-centric view of scene graphs that connect to dependency parses.", "Together with a careful redesign of label and action space, we combine the two-stage pipeline used in prior work (generic dependency parsing followed by simple post-processing) into one, enabling end-to-end training.", "The scene graphs generated by our learned neural dependency parser achieve an F-score similarity of 49.67% to ground truth graphs on our evaluation set, surpassing best previous approaches by 5%.", "We further demonstrate the effectiveness of our learned parser on image retrieval applications.", "1 1 Introduction Recent years have witnessed the rise of interest in many tasks at the intersection of computer vision and natural language processing, including semantic image retrieval (Johnson et al., 2015; Vendrov et al., 2015), image captioning (Mao et al., 2014; Karpathy and Li, 2015; Donahue et al., 2015; Liu et al., 2017b), visual question answering (Antol et al., 2015; Zhu et al., 2016; Andreas et al., 2016), and referring expressions (Hu et al., 2016; Mao et al., 2016; Liu et al., 2017a).", "The pursuit for these tasks is in line with people's desire for high level understanding of visual content, in particular, using textual descriptions or questions to help understand or express images and scenes.", "What is shared among all these tasks is the need for a common representation to establish connection between the two different modalities.", "The majority of recent works handle the vision side with convolutional neural networks, and the language side with recurrent neural networks (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) or word embeddings (Mikolov et al., 2013; Pennington et al., 2014).", "In either case, neural networks map original sources into a semantically meaningful (Donahue et al., 2014; Mikolov et al., 2013) vector representation that can be aligned through end-to-end training (Frome et al., 2013).", "This suggests that the vector embedding space is an appropriate choice as the common representation connecting different modalities (see e.g. Kaiser et al. 
(2017)).", "While the dense vector representation yields impressive performance, it has the unfortunate limitation of being less intuitive and hard to interpret.", "Scene graphs, proposed in Johnson et al. (2015), are a type of directed graph that encodes information in terms of objects, attributes of objects, and relationships between objects (see Figure 1 for a visualization).", "This is a more structured and explainable way of expressing the knowledge from either modality, and is able to serve as an alternative form of common representation.", "In fact, the value of the scene graph representation has already been proven in a wide range of visual tasks, including semantic image retrieval (Johnson et al., 2015), caption quality evaluation (Anderson et al., 2016), etc.", "In this paper, we focus on scene graph generation from textual descriptions.", "Previous attempts at this problem (Schuster et al., 2015; Anderson et al., 2016) follow the same spirit.", "They first use a dependency parser to obtain the dependency relationships for all words in a sentence, and then use either a rule-based or a learned classifier as post-processing to generate the scene graph.", "(Figure 1: Each image in the Visual Genome (Krishna et al., 2017) dataset contains tens of region descriptions and the region scene graphs associated with them.)", "However, the rule-based classifier cannot learn from data, and the learned classifier is rather simple with hand-engineered features.", "In addition, the dependency parser was trained on linguistics data to produce complete dependency trees, some parts of which may be redundant and hence confuse the scene graph generation process.", "Therefore, our model abandons the two-stage pipeline, and uses a single, customized dependency parser instead.", "The customization is necessary for two reasons.", "First is the difference in label space.", "Standard dependency parsing has tens of edge labels to represent rich relationships between words in a sentence, but in scene graphs we are only interested in three types, namely objects, attributes, and relations.", "Second is whether every word needs a head.", "In some sense, the scene graph represents the skeleton of the sentence, which suggests that empty words are unlikely to be included in the scene graph.", "We argue that in scene graph generation, it is unnecessary to require a parent word for every single word.", "We customize a neural dependency parser implementation (Kiperwasser and Goldberg, 2016) that is among the state-of-the-art.", "We show that our carefully customized dependency parser is able to generate high quality scene graphs by learning from data.", "Specifically, we use the Visual Genome dataset (Krishna et al., 2017), which provides rich amounts of region description-region graph pairs.", "We first align nodes in region graphs with words in the region descriptions using simple rules, and then use this alignment to train our 
customized dependency parser.", "We evaluate our parser by computing the F-score between the parsed scene graphs and the ground truth scene graphs.", "We also apply our approach to image retrieval to show its effectiveness.", "The scene graph representation was proposed in Johnson et al. (2015) as a way to represent the rich, structured knowledge within an image.", "The nodes in a scene graph represent either an object, an attribute of an object, or a relationship between two objects.", "The edges depict the connection and association between two nodes.", "This representation was later adopted in the Visual Genome dataset (Krishna et al., 2017), where a large number of scene graphs are annotated through crowd-sourcing.", "The scene graph representation has been proved useful in various problems including semantic image retrieval (Johnson et al., 2015), visual question answering (Teney et al., 2016), 3D scene synthesis (Chang et al., 2014), and visual relationship detection (Lu et al., 2016).", "Excluding Johnson et al. (2015), which used ground truth, scene graphs are obtained either from images (Dai et al., 2017; Xu et al., 2017; Li et al., 2017) or from textual descriptions (Schuster et al., 2015; Anderson et al., 2016).", "In this paper we focus on the latter.", "In particular, parsed scene graphs are used in Schuster et al. (2015) for image retrieval.", "We show that with our more accurate scene graph parser, performance on this task can be further improved.", "The goal of dependency parsing (Kübler et al., 2009) is to assign a parent word to every word in a sentence, and every such connection is associated with a label.", "Dependency parsing is particularly suitable for scene graph generation because it directly models the relationship between individual words without introducing extra nonterminals.", "In fact, all previous work (Schuster et al., 2015; Anderson et al., 2016) on scene graph generation runs dependency parsing on the textual description as a first step, followed by either heuristic rules or simple classifiers.", "Instead of running two separate stages, our work proposes to use a single dependency parser that is end-to-end.", "In other words, our customized dependency parser generates the scene graph in an online fashion as it reads the textual description once from left to right.", "In recent years, dependency parsing with neural network features (Chen and Manning, 2014; Dyer et al., 2015; Cross and Huang, 2016; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2016; Shi et al., 2017) has shown impressive performance.", "In particular, Kiperwasser and Goldberg (2016) used bidirectional LSTMs to generate features for individual words, which are then used to predict parsing actions.", "We base our model on Kiperwasser and Goldberg (2016) for both its simplicity and good performance.", "Apart from dependency parsing, Abstract Meaning Representation (AMR) parsing (Flanigan et al., 2014; Werling et al., 2015; Wang et al., 2015; Konstas et al., 2017) may also benefit scene graph generation.", "However, as first pointed out in Anderson et al. (2016), the use of dependency trees still appears to be a common theme in the literature, and we leave the exploration of AMR parsing for scene graph generation as future work.", "More broadly, our task also relates to entity and relation extraction, e.g. 
Katiyar and Cardie (2017), but there object attributes are not handled.", "Neural module networks (Andreas et al., 2016) also use dependency parses, but they translate questions into a series of actions, whereas we parse descriptions into their graph form.", "Finally, Krishnamurthy and Kollar (2013) connected parsing and grounding by training the parser in a weakly supervised fashion.", "In this section, we begin by reviewing the scene graph representation, and show how its nodes and edges relate to the words and arcs in dependency parsing.", "We then describe simple yet reliable rules to align nodes in scene graphs with words in textual descriptions, such that customized dependency parsing, described in the next section, may be trained and applied.", "There are three types of nodes in a scene graph: object, attribute, and relation.", "Let O be the set of object classes, A be the set of attribute types, and R be the set of relation types.", "Given a sentence s, our goal in this paper is to parse s into a scene graph: G(s) = ⟨O(s), A(s), R(s)⟩ (1) where O(s) = {o_1(s), ..., o_m(s)}, o_i(s) ∈ O, is the set of object instances mentioned in s, A(s) ⊆ O(s) × A is the set of attributes associated with object instances, and R(s) ⊆ O(s) × R × O(s) is the set of relations between object instances.", "G(s) is a graph because we can first create an object node for every element in O(s); then for every (o, a) pair in A(s), we create an attribute node and add an unlabeled edge o → a; finally for every (o_1, r, o_2) triplet in R(s), we create a relation node and add two unlabeled edges o_1 → r and r → o_2.", "The resulting directed graph exactly encodes the information in G(s).", "We call this the node-centric graph representation of a scene graph.", "We realize that a scene graph can be equivalently represented by no longer distinguishing between the three types of nodes, yet assigning labels to the edges instead.", "Concretely, this means there is now only one type of node, but we assign an ATTR label to every o → a edge, a SUBJ label to every o_1 → r edge, and an OBJT label to every r → o_2 edge.", "We call this the edge-centric graph representation of a scene graph.", "We can now establish a connection between scene graphs and dependency trees.", "Here we only consider scene graphs that are acyclic (in Visual Genome, only 4.8% of region graphs have cyclic structures).", "The edge-centric view of a scene graph is very similar to a dependency tree: they are both directed acyclic graphs where the edges/arcs have labels.", "The difference is that in a scene graph, the nodes are the objects/attributes/relations and the edges have label space {ATTR, SUBJ, OBJT}, whereas in a dependency tree, the nodes are individual words in a sentence and the edges have a much larger label space.", "We have shown the connection between nodes in scene graphs and words in dependency parsing.", "With alignment between nodes in scene graphs and words in the textual description, scene graph generation and dependency parsing become equivalent: we can construct the generated scene graph from the set of labeled edges returned by the dependency parser.", "Unfortunately, such alignment is not provided between the region graphs and region descriptions in the Visual Genome (Krishna et al., 2017) dataset.", "Here we describe how we use simple yet reliable rules to do sentence-graph (word-node) alignment.", "There are two strategies that we could use in deciding whether to 
align a scene graph node d (whose label space is O ∪ A ∪ R) with a word/phrase w in the sentence: Word-by-word match (WBW): d ↔ w only when d's label and w match word-for-word.", "Synonym match (SYN): d ↔ w when the WordNet synonyms of d's label contain w.", "Obviously WBW is a more conservative strategy than SYN.", "We propose to use two cycles, and each cycle further consists of three steps, where we try to align objects, attributes, and relations in that order.", "The pseudocode for the first cycle is in Algorithm 1.", "The second cycle repeats lines 4-15 immediately afterwards, except that in line 6 we also allow SYN.", "Intuitively, in the first cycle we use a conservative strategy to find safe objects, and then scan for their attributes and relations.", "In the second cycle we relax and allow synonyms in aligning object nodes, also followed by the alignment of attribute and relation nodes.", "The ablation study of the alignment procedure is reported in the experimental section.", "In the previous section, we have established the connection between scene graph generation and dependency parsing, which assigns a parent word to every word in a sentence, as well as a label for this directed arc.", "We start by describing our base dependency parsing model, which is neural network based and performs among the state-of-the-art.", "We then show why and how we do customization, such that scene graph generation is achieved with a single, end-to-end model.", "We base our model on the transition-based parser of Kiperwasser and Goldberg (2016).", "Here we describe its key components: the arc-hybrid system that defines the transition actions, the neural architecture for the feature extractor and scoring function, and the loss function.", "The Arc-Hybrid System In the arc-hybrid system, a configuration consists of a stack σ, a buffer β, and a set T of dependency arcs.", "Given a sentence s = w_1, ..., w_n, the system is initialized with an empty stack σ, an empty arc set T, and β = 1, ..., n, ROOT, where ROOT is a special index.", "The system terminates when σ is empty and β contains only ROOT.", "The dependency tree is given by the arc set T upon termination.", "The arc-hybrid system allows three transition actions, SHIFT, LEFT(l), and RIGHT(l), described in Table 1.", "The SHIFT transition moves the first element of the buffer to the stack.", "The LEFT(l) transition yields an arc from the first element of the buffer to the top element of the stack, and then removes the top element from the stack.", "Table 1: Transition actions under the arc-hybrid system. SHIFT: (σ, b_0|β, T) ⇒ (σ|b_0, β, T); LEFT(l): (σ|s_1|s_0, b_0|β, T) ⇒ (σ|s_1, b_0|β, T ∪ {(b_0, s_0, l)}); RIGHT(l): (σ|s_1|s_0, β, T) ⇒ (σ|s_1, β, T ∪ {(s_1, s_0, l)}); REDUCE: (σ|s_0, β, T) ⇒ (σ, β, T).", "The RIGHT(l) transition yields an arc from the second top element of the stack to the top element of the stack, and then also removes the top element from the stack.", "The following paragraphs describe how to select the correct transition action (and label l) in each step in order to generate a correct dependency tree.", "BiLSTM Feature Extractor Let the word embeddings of a sentence s be w_1, ..., w_n.", "An LSTM cell is a parameterized function that takes as input w_t, and updates its hidden states: LSTMcell: (w_t, h_{t-1}) → h_t (2) As a result, an LSTM network, which simply applies the LSTM cell t times, is a parameterized function mapping a sequence of input vectors w_{1:t} to a sequence of output vectors h_{1:t}.", "In our notation, we drop the intermediate vectors h_{1:t-1} and let LSTM(w_{1:t}) represent h_t.", "A bidirectional LSTM, or BiLSTM for short, consists of two LSTMs: LSTM_F, which reads the input sequence in the original order, and LSTM_B, which reads it in reverse.", "Then BiLSTM(w_{1:n}, i) = LSTM_F(w_{1:i}) ∘ LSTM_B(w_{n:i}) (3) where ∘ denotes concatenation.", "Intuitively, the forward LSTM encodes information from the left side of the i-th word and the backward LSTM encodes information to its right, such that the vector v_i = BiLSTM(w_{1:n}, i) has the full sentence as context.", "When predicting the transition action, the feature function φ(c) that summarizes the current configuration c = (σ, β, T) is simply the concatenated BiLSTM vectors of the top three elements in the stack and the first element in the buffer: φ(c) = v_{s_2} ∘ v_{s_1} ∘ v_{s_0} ∘ v_{b_0} (4) MLP Scoring Function The score of transition action y under the current configuration c is determined by a multi-layer perceptron with one hidden layer: f(c, y) = MLP(φ(c))[y] (5) where MLP(x) = W_2 tanh(W_1 x + b_1) + b_2 (6) Hinge Loss Function The training objective is to raise the scores of correct transitions above the scores of incorrect ones.", "Therefore, at each step, we use a hinge loss defined as: L = max(0, 1 − max_{y+ ∈ Y+} f(c, y+) + max_{y− ∈ Y \ Y+} f(c, y−)) (7) where Y is the set of possible transitions and Y+ is the set of correct transitions at the current step.", "In each training step, the parser scores all possible transitions using Eqn.", "5, incurs a loss using Eqn.", "7, selects a following transition, and updates the configuration.", "Losses at individual steps are summed throughout the parsing of a sentence, and then parameters are updated using backpropagation.", "At test time, we simply choose the transition action that yields the highest score at each step.", "In order to generate scene graphs with dependency parsing, modification is necessary for at least two reasons.", "First, we need to redefine the label space of arcs so as to reflect the edge-centric representation of a scene graph.", "Second, not every word in the sentence will be (part of) a node in the scene graph (see Figure 2 for an example).", "In other words, some words in the sentence may not have a parent word, which violates the dependency parsing setting.", "We tackle these two challenges by redesigning the edge labels and expanding the set of transition actions.", "Redesigning Edge Labels We define a total of five edge labels, so as to faithfully bridge the edge-centric view of scene graphs with dependency parsing models:", "CONT: This label is created for nodes whose label is a phrase.", "For example, the phrase in front of is a single relation node in the scene graph.", "By introducing the CONT label, we expect the parsing result to be either in -CONT-> front -CONT-> of (8) or in <-CONT- front <-CONT- of (9), where the direction of the arcs (left or right) is predefined by hand.", "The leftmost word under the right-arc rule or the rightmost word under the left-arc rule is called the head of the phrase.", "A single-word node does not need this CONT label, and the head is itself.", "ATTR: The arc label from 
the head of an object node to the head of an attribute node.", "SUBJ: The arc label from the head of an object node (subject) to the head of a relation node.", "OBJT: The arc label from the head of a relation node to the head of an object node (object).", "BEGN: The arc label from the ROOT index to all heads of object nodes without a parent.", "Expanding Transition Actions With the three transition actions SHIFT, LEFT(l), RIGHT(l), we only drop an element (from the top of the stack) after it has already been associated with an arc.", "This design ensures that an arc is associated with every word.", "However, in our setting for scene graph generation, there may be no arc for some of the words, especially empty words.", "Our solution is to augment the action set with a REDUCE action that pops the stack without adding to the arc set (see Table 1).", "This action is often used in other transition-based dependency parsing systems (e.g. arc-eager (Nivre, 2004)).", "More recently, Hershcovich et al. (2017) and Buys and Blunsom (2017) also included this action when parsing sentences to graph structures.", "We still minimize the loss function defined in Eqn.", "7, except that now |Y| increases from 3 to 4.", "During training, we impose the oracle to select the REDUCE action when it is in Y+.", "In terms of the loss function, we increment by 1 the loss incurred by the other 3 transition actions if REDUCE incurs zero loss.", "We train and evaluate our scene graph parsing model on (a subset of) the Visual Genome (Krishna et al., 2017) dataset.", "Each image in Visual Genome contains a number of regions, and each region is annotated with both a region description and a region scene graph.", "Our training set is the intersection of Visual Genome and the MS COCO (Lin et al., 2014) train2014 set, which contains a total of 34,027 images / 1,070,145 regions.", "We evaluate on the intersection of Visual Genome and the MS COCO val2014 set, which contains a total of 17,471 images / 547,795 regions.", "In our experiments, the number of hidden units in the BiLSTM is 256; the number of layers in the BiLSTM is 2; the word embedding dimension is 200; the number of hidden units in the MLP is 100.", "We use a fixed learning rate of 0.001 and the Adam optimizer (Kingma and Ba, 2014) with epsilon 0.01.", "Training usually converges within 4 epochs.", "We will release our code and trained model upon acceptance.", "scene graph parsing.", "Specifically, for every region, we parse its description using a parser (e.g. the one used in SPICE or our customized dependency parser), and then calculate the F-score between the parsed graph and the ground truth region graph (see Section 3.2 of Anderson et al. 
(2016) for more details).", "Note that when SPICE calculates the F-score, a node in one graph can be matched to several nodes in the other, which is problematic.", "We fix this and enforce one-to-one matching when calculating the F-score.", "Finally, we report the average F-score across all regions.", "Our parser achieves an average F-score of 49.67%, which significantly outperforms the parser used in SPICE by 5 percent.", "This result shows that our customized dependency parser is very effective at learning from data, and generates more accurate scene graphs than the best previous approach.", "Ablation Studies First, we study how the sentence-graph alignment procedure affects the final performance.", "Recall that our procedure involves two cycles, each with three steps.", "Of the six steps, synonym match (SYN) is only not used in the first step.", "We tried two more settings, where SYN is either used in all six steps or in none of the six steps.", "We can see from Table 2 that the final [...]", "Table 3: Image retrieval results. Development set: (Schuster et al., 2015) R@5 33.82%, R@10 45.58%, Med. rank 6; Ours R@5 36.69%, R@10 49.41%, Med. rank 4. Test set: (Schuster et al., 2015) R@5 34.96%, R@10 45.68%, Med. rank 5; Ours R@5 36.70%, R@10 49.37%, Med. rank 5.", "Second, we study whether changing the direction of CONT arcs from pointing left to pointing right will make much difference.", "Table 2 shows that the two choices give very similar performance, suggesting that our dependency parser is robust to this design choice.", "Finally, we report the oracle score, which is the similarity between the aligned graphs that we use during training and the ground truth graphs.", "The F-score is relatively high at 69.85%.", "This shows that improving the parser (about 20% margin) and improving the sentence-graph alignment (about 30% margin) are both promising directions for future research.", "Qualitative Examples We provide one parsing example in Figure 2 and Figure 3.", "This is a sentence that is relatively simple, and the underlying scene graph includes two object nodes, one attribute node, and one compound-word relation node.", "In parsing this sentence, all four actions listed in Table 1 are used (see Figure 3) to produce the edge-centric scene graph (bottom left of Figure 2), which is then trivially converted to the node-centric scene graph (bottom right of Figure 2).", "We test if the advantage of our parser can be propagated to computer vision tasks, such as image retrieval.", "We directly compare our parser with the Stanford Scene Graph Parser (Schuster et al., 2015) on the development set and test set of the image retrieval dataset used in Schuster et al. (2015) (not Visual Genome).", "For every region in an image, there is a human-annotated region description and region scene graph.", "The queries are the region descriptions.", "If the region graph corresponding to the query is a subgraph of the complete graph of another image, then that image is added to the ground truth set for this query.", "All these strictly follow Schuster et al. (2015).", "However, since we did not obtain nor reproduce the CRF model used in Johnson et al. (2015) and Schuster et al. (2015), we used F-score similarity instead of the likelihood of the maximum a posteriori CRF solution when ranking the images based on the region descriptions.", "Therefore the numbers we report in Table 3 are not directly comparable with those reported in Schuster et al. 
(2015).", "Our parser delivers better retrieval performance across all three evaluation metrics: recall@5, re-call@10, and median rank.", "We also notice that the numbers in our retrieval setting are higher than those (even with oracle) in Schuster et al. (2015)'s retrieval setting.", "This strongly suggests that generating accurate scene graphs from images is a very promising research direction in image retrieval, and grounding parsed scene graphs to bounding box proposals without considering visual attributes/relationships (Johnson et al., 2015) is suboptimal.", "In this paper, we offer a new perspective and solution to the task of parsing scene graphs from textual descriptions.", "We begin by moving the la-bels/types from the nodes to the edges and introducing the edge-centric view of scene graphs.", "We further show that the gap between edge-centric scene graphs and dependency parses can be filled with a careful redesign of label and action space.", "This motivates us to train a single, customized, end-to-end neural dependency parser for this task, as opposed to prior approaches that used generic dependency parsing followed by heuristics or simple classifier.", "We directly train our parser on a subset of Visual Genome (Krishna et al., 2017), without transferring any knowledge from Penn Tree-bank (Marcus et al., 1993) as previous works did.", "The quality of our trained parser is validated in terms of both SPICE similarity to the ground truth graphs and recall rate/median rank when performing image retrieval.", "We hope our paper can lead to more thoughts on the creative uses and extensions of existing NLP tools to tasks and datasets in other domains.", "In the future, we plan to tackle more computer vision tasks with this improved scene graph parsing technique in hand, such as image region grounding.", "We also plan to investigate parsing scene graph with cyclic structures, as well as whether/how the image information can help boost parsing quality.", "The majority of this work was done when YSW and XZ were visiting Johns Hopkins University.", "We thank Peter Anderson, Sebastian Schuster, Ranjay Krishna, Tsung-Yi Lin for comments and help regarding the experiments.", "We also thank Tianze Shi, Dingquan Wang, Chu-Cheng Lin for discussion and feedback on the draft.", "This work was sponsored by the National Science Foundation Center for Brains, Minds, and Machines NSF CCF-1231216.", "CL also acknowledges an award from Snap Inc." ]
[ "method", "method", "abstain", "method", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "other", "other", "objective", "method", "other", "other", "abstain", "other", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "method", "method", "abstain", "method", "objective", "other", "other", "other", "other", "other" ]
[ "In this paper, we introduce UNIFIED M2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup.", "The model is trained to handle four tasks: detecting news bias , clickbait , fake news and verifying rumors .", "By grouping these tasks together, UNIFIED M2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks.", "Furthermore, we demonstrate that UNIFIED M2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets and model's generalizability to unseen events.", "On any given day, 2 .", "5 quintillion bytes of information are created on the Internet, a figure that is only expected to increase in the coming years (Marr, 2018).", "The internet has allowed information to spread rapidly, and studies have found that misinformation spreads quicker and more broadly than true information (Vosoughi et al., 2018).", "It is thus paramount for misinformation detection approaches to be able to adapt to new, emerging problems in real time, without waiting for thousands of training examples to be collected.", "In other words, the generalizability of such systems is essential.", "Misinformation detection is not well-studied from a generalizability standpoint.", "Misinformation can manifest in different forms and domains, i.e., fake news, clickbait, and false rumors, and previous literature has mostly focused on building specialized models for a single domain (Rubin et al., 2016; Omidvar et al., 2018; Ma et al., 2018).", "(Even prior literature on multi-tasking for misinformation (Kochkina et al., 2018) focuses more on Work partially done while interning at Facebook AI.", "tasks.) However, though these domains may differ in format (long articles vs. short headlines and tweets) and exact objective (is this fake vs. 
is this clickbait), they have the same ultimate goal of deceiving their readers.", "As a result, their content often exhibits similar linguistic characteristics, such as using a sensational style to incite curiosity or strong emotional responses from readers.", "Furthermore, models trained on multiple tasks are more robust and less prone to overfitting to spurious domain-specific correlations.", "Thus, unifying various domains of misinformation allows us to build a generalizable model that performs well across multiple domains/formats of misinformation.", "In this work, we propose the Unified Misinfo Model (UNIFIED M2), a misinformation detection model that uses multi-task learning (Caruana, 1997; Maurer et al., 2016; Zhang and Yang, 2017) to train on different domains of misinformation.", "Through a comprehensive series of empirical evaluations, we demonstrate that our approach is effective on all tasks that we train on, improving F1 in some cases by an absolute 8%.", "Moreover, we conduct ablation studies to more precisely characterize how such positive transfer is attained.", "Beyond improvements on seen datasets, we examine the gen- [Table 1: Summary of the four misinformation datasets we train on with UNIFIED M2. Task / dataset name / granularity / labels (positive/negative) / dataset size / positive class size: NEWSBIAS / BASIL / sentence / contains-bias vs. no-bias / 7,984 / 1,727; FAKENEWS / Webis / article / fake vs. true / 1,627 / 363; RUMOR / PHEME / tweet / fake vs. true / 1,705 / 1,067; CLICKBAIT / Clickbait / headline / is-clickbait vs. not-clickbait / 19,538 / 4,761.]", "eralizability of our proposed approach to unseen tasks/datasets and events.", "This is highly applicable to real-world use cases, where obtaining new misinformation labels is costly and systems often wish to take down misinformation in real time.", "Our experimental results indicate that our unified representation has better generalization ability than other baselines.", "Our proposed model architecture is a hard-parameter-sharing multi-task learning model (Ruder, 2017), where a single shared RoBERTa (Liu et al., 2019b) encoder is used across all tasks.", "RoBERTa is a Transformer encoder pretrained with a masked-language-modeling objective on English Wikipedia and news articles (CC-NEWS), among other data.", "We additionally append task-specific multi-layer perceptron (MLP) classification heads following the shared encoder.", "During multi-task training, the model sees examples from all datasets, and we jointly train the shared encoder with all task-specific heads.", "During inference time, we only use the classification head relevant to the inference-time task.", "The overall architecture of the model is shown in Figure 1.", "Our model training process consists of two steps.", "The first step is multi-task training of the shared UNIFIED M2 encoder to learn a general misinformation representation.", "We jointly optimize for all tasks t_1, ..., t_T by optimizing the sum of their task-specific losses L_t, where L_t refers to the cross-entropy loss of the task-specific MLP classifiers.", "Our overall loss is defined as $L_{\text{multi}} = \sum_{t = t_1}^{t_T} L_t$.", "Note that since the dataset sizes are different, we over-sample from the smaller datasets to make the training examples roughly equal.", "The second step is to fine-tune each task-specific head again, similarly to the MT-DNN by Liu et al. (2019a), to obtain the results reported in Table 2 and Table", "4.
3 Experiment Here, we provide experimental details (dataset, baselines, experimental setups) and results that empirically show the success of the proposed UNIFIED M2 model.", "Table 1 lists the four misinformation tasks/datasets we use to train UNIFIED M2.", "They span various granularities and domains (articles, sentences, headlines, and tweets) as well as various objectives (classifying veracity, bias, and clickbaity-ness).", "NEWSBIAS: A task to classify whether a given sentence from a news article contains political bias or not.", "We adapt the BASIL (Fan et al., 2019) dataset, which has bias-span annotations for lexical and informational bias within news articles.", "Using this dataset, we also include two auxiliary tasks related to political-bias detection: 1) bias type classification: given a biased sentence, the type of the bias (lexical vs. informational) is classified; and 2) polarity detection: given a biased sentence, its polarity (positive, negative, neutral) is determined.", "FAKENEWS: An article-level fake news detection task that leverages the Webis (Potthast et al., 2018) dataset annotated by professional journalists.", "RUMOR: A task to verify the veracity of a rumor tweet.", "The PHEME dataset (Zubiaga et al., 2016), which contains rumor tweets with their corresponding reply tweets (social engagement data), is used for this task.", "We only use the text of the source rumor tweet since we focus on learning a good representation for misinformation text.", "Originally, [Table 2: Results of single-task SoTA papers, the single-task RoBERTa baseline, and our UNIFIED M2 on all misinformation tasks (Acc / F1). NEWSBIAS: SoTA N/A / 32.0% | 43.0%; RoBERTa 72.8% / 65.5%; UNIFIED M2 81.0% / 70.2%. FAKENEWS: SoTA 58.0% / 46.0%; RoBERTa 84.3% / 74.9%; UNIFIED M2 85.4% / 73.9%. RUMOR: SoTA 81.0% / 80.0%; RoBERTa 87.6% / 86.9%; UNIFIED M2 92.9% / 92.5%. CLICKBAIT: SoTA 83.0% / 57.0%; RoBERTa 84.4% / 77.4%; UNIFIED M2 86.3% / 78.7%.]", "there were three class labels (true, false, unverified); however, following other literature (Derczynski et al., 2017; Wu et al., 2019), we report the binary version, excluding the unverified label.", "CLICKBAIT: A task to detect the clickbaity-ness of news headlines, which refers to sensational headlines that might deceive and mislead readers.", "For this task, we use the dataset from the Clickbait Challenge.", "3.2 Baseline Models State-of-the-Art Models For each misinformation task, we report and compare our approach to the SoTA models from Fan et al. (2019) for NEWSBIAS, Potthast et al. (2018) for FAKENEWS, Wu et al. (2019) for RUMOR, and Omidvar et al. (2018) for CLICKBAIT.", "RoBERTa-based Baselines In addition to each task's published SoTA model, we create RoBERTa-based models by fine-tuning RoBERTa on each individual task.", "Training Details We ran all our experiments three times with different shots, and report the average.", "Our UNIFIED M2 model is based on the RoBERTa-large model, which has 355M parameters.", "We used the Adam optimizer (Kingma and Ba, 2014) with a mini-batch size of 32.", "The learning rate was set to 5e-6 with linear learning rate decay.", "The maximum epoch count was 15, with early stopping patience set to", "5. The maximum sequence length of input was set to 128.", "These parameters [footnote: https://www.clickbait-challenge.org/]", "There are two versions of the labeled dataset, but we only use the larger one.", "[footnote:] They report bias-detection performance separately on the lexical-bias vs. no-bias setting and the informational-bias vs.
no-bias setting.", "In our experiments, we treat both lexicalbias and informational-bias to be contains-bias class, and conduct one unified experiment.", "were obtained by performing grid-search over our validation loss.", "We search within the following hyper-parameter bounds: LR = { 5 e 5 , 5 e 6 , 5 e 7 } , batch = { 16 , 32 } .", "Training Details for few-shot experiments We did not do any parameter searching for these few-shot experiments.", "We kept all the training details and parameters the same to the training details that are state above.", "Computing Infrastructure We ran all experiments with 1 NVIDIA TESLA V100 GPU with 32 GB of memory.", "Table 2 presents the results of our proposed unified model, UNIFIED M2, along with the two groups of baseline models.", "UNIFIED M2 achieves better or comparable results over both baselines for all four misinformation tasks.", "The improvement is especially prominent on the NEWSBIAS and RUMOR tasks, where we see an 8% and 5% improvement in accuracy, respectively.", "We conduct an ablation study to better understand how other tasks help in our multitask framework.", "Namely, how well do more similar vs. more different kinds of task transfer to each other?", "Specifically, we use the RUMOR dataset as a case study 3 .", "We train on multiple task combinations and evaluate their performance on RUMOR .", "Results are shown in Table 3.", "Note that adding FAKENEWS alone to single-task RoBERTa, or NEWSBIAS , actually hurts performance, indicating that multi-task learning is not simply a matter of data augmentation.", "We hypothesize that the drop is due to FAKENEWS being the least similar in format and style to RUMOR .", "Qualitatively, we compare examples from FAKENEWS and CLICKBAIT (the most helpful dataset) to RUMOR .", "Examples from FAKENEWS are long documents with a mix of formal and sensational styles, whereas CLICKBAIT contains short, sensational sentences.", "However, as the model is trained on more datasets, adding the less similar FAKENEWS task actually improves overall performance ( 90 . 5 92 . 5 F1 in three datasets), despite hurting the model trained on RUMOR only ( 86 . 9 78 . 
7 F1).", "We hypothesize this is due, in part, to including more diverse sources of data, which improves the robustness of the model to different types of misinformation.", "New types, domains, and subjects of misinformation arise frequently.", "Promptly responding to these new sources is challenging, as they can spread widely before there is time to collect sufficient task-specific training examples.", "For instance, the rapid spread of COVID-19 was accompanied by equally fast spread of large quantities of misinformation (Joszt, 2020; Kouzy et al., 2020).", "Therefore, we carry out experiments to evaluate the generalization ability of UNIFIED M2 representation to unseen misinformation", "(i) tasks/datasets and", "(ii) events.", "The first experiment is about fast adaption ability (few-shot training) to handle a new task/dataset, whereas the second experiment is about the model's ability to perform well on events unseen during training.", "Dataset We evaluate using the following four unseen datasets: PROPAGANDA (Da San Martino et al., 2019), which contains 21,230 propaganda and non-propaganda sentences, with the propaganda sentences annotated by fine-grained propaganda technique labels, such as Name calling and Appeal to fear; POLITIFACT (Shu et al., 2019), which contains 91 true and 91 fake news articles collected from PolitiFact's fact-checking platform; BUZZFEED (Shu et al., 2019), which contains 120 true and 120 fake news headlines collected from BuzzFeed's fact-checking platform; and COVIDTWITTER (Alam et al., 2020), which contains 504 COVID-19-related tweets.", "For our experiment, we use two of the annotations: 1) Twitter Check-worthiness : does the tweet contain a verifi-able factual claim?", "2) Twitter False Claim : does the tweet contain false information?", "Few-shot Experiments We compare the few-shot performance of UNIFIED M2 against off-the-shelf RoBERTa and single-task RoBERTa.", "For each unseen dataset, a new MLP classification head is trained on top of the RoBERTa encoder, in a few-shot manner.", "Given N d to be the size of the given dataset d , we train the few-shot classifiers with k randomly selected samples and evaluate on the remaining N k samples.", "We test with k = 10 , 25 , 50 .", "Note that for single-task RoBERTa, we report the average performance across the four Model Acc F1 SoTA'19 (Li et al., 2019) 48.30% 41.80% SoTA'20 (Yu et al., 2020) 39.60% 46.60% Vanilla 47.07% 33.90% UNIFIED M2 64.74% 44.74% Table 5: Average acc and macro-F1 scores from leave-one-event-out cross-validation setup for RUMOR task.", "As shown in Table 4, our UNIFIED M2 encoder can quickly adapt to new tasks, even with very little in-domain data.", "While both the single-task models and UNIFIED M2 significantly outperform vanilla RoBERTa, UNIFIED M2 further outperforms the single-task models, indicating that multi-task learning can aid task generalizability.", "Dataset We use the previously introduced RUMOR dataset, which includes nine separate events, for this experiment.", "A group of works (Kochkina et al., 2018; Li et al., 2019; Yu et al., 2020) have used this dataset in a leave-one-event-out cross-validation setup (eight events for training and one event for testing) to take event generalizability into consideration in their model evaluation.", "We conduct a supplementary experiment following this evaluation setup for the completeness of our analysis.", "Experiment First, we train the UNIFIED M2 encoder without RUMOR data, and then fine-tune and evaluate in the leave-one-event-out 
cross-validation setup.", "Note that we re-train the UNIFIED M2 encoder to ensure that it has no knowledge of the left-out-event testset.", "Results in Table 5 show that our proposed method outperforms two recent SoTA models (Li et al., 2019; Yu et al., 2020) by an absolute 16 .", "44% and 25 .", "14% in accuracy.", "This indicates that unified misinformation representations are helpful in event generalizability as well.", "Existing misinformation works take three main approaches: Content-based approaches examine the language of a document only.", "Prior works have looked at linguistic features such as hedging words and emotional words (Rubin et al., 2016; Potthast et al., 2018; Rashkin et al., 2017; Wang, 2017).", "Fact-based approaches leverage evidence from external sources (e.g., Wikipedia, Web) to determine the truthfulness of the information (Etzioni et al., 2008; Wu et al., 2014; Ciampaglia et al., 2015; Popat et al., 2018; Thorne et al., 2018; Nie et al., 2019).", "Finally, social-data-based approaches use the surrounding social datasuch as the credibility of the authors of the information (Long et al., 2017; Kirilin and Strube, 2018; Li et al., 2019) or social engagement data (Derczynski et al., 2017; Ma et al., 2018; Kwon et al., 2013; Volkova et al., 2017).", "Though prior works have explored multi-task learning within misinformation, they have focused exclusively on one domain.", "These works try to predict two different labels on the same set of examples from a single (Kochkina et al., 2018) or two closely-related datasets (Wu et al., 2019).", "In contrast, our proposed approach crosses not just task or dataset boundaries, but also format and domain boundaries.", "Furthermore, prior works focus on using an auxiliary task to boost the performance of the main task, while we focus on using multitasking to generalize across many domains.", "Thus, the focus of this work is not the multitask paradigm, but rather the unification of the various domains, using multitasking.", "In this paper, we introduced UNIFIED M2, which unifies multiple domains of misinformation with a single multi-task learning setup.", "We empirically showed that such unification improves the model's performance against strong baselines, and achieves new state-of-the-art results.", "Furthermore, we show that UNIFIED M2 can generalize to out-of-domain misinformation tasks and events, and thus can serve as a good starting point for others working on misinformation." ]
[ "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "result", "objective", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "result", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "abstain", "objective", "result" ]
[ "In this paper, we present the first comprehensive categorization of essential commonsense knowledge for answering the Winograd Schema Challenge (WSC).", "For each of the questions, we invite annotators to first provide reasons for making correct decisions and then categorize them into six major knowledge categories.", "By doing so, we better understand the limitation of existing methods (i.e., what kind of knowledge cannot be effectively represented or inferred with existing methods) and shed some light on the commonsense knowledge that we need to acquire in the future for better commonsense reasoning.", "Moreover, to investigate whether current WSC models can understand the commonsense or they simply solve the WSC questions based on the statistical bias of the dataset, we leverage the collected reasons to develop a new task called WinoWhy, which requires models to distinguish plausible reasons from very similar but wrong reasons for all WSC questions.", "Experimental results prove that even though pre-trained language representation models have achieved promising progress on the original WSC dataset, they are still struggling at WinoWhy.", "Further experiments show that even though supervised models can achieve better performance, the performance of these models can be sensitive to the dataset distribution.", "WinoWhy and all codes are available at: https://github.com/ HKUST-KnowComp/WinoWhy .", "Commonsense reasoning, as an important problem of natural language understanding, has attracted much more attention in the NLP community recently (Levesque et al., 2012; Zhou et al., 2018; Ostermann et al., 2018; Talmor et al.,", "2019).", "Among all developed commonsense reasoning tasks, the Winograd Schema Challenge (WSC) (Levesque et al., 2012), which is a hard pronoun coreference resolution task, is one of the most influential ones.", "All questions in WSC are grouped into pairs such that paired questions have minor differences (mostly one-word difference), but reversed answers.", "For each question, we denote the other question in the same pair as its reverse question.", "One pair of the WSC task is shown in Figure 1. 
Based on the design guidelines of WSC, all commonly used features (e.g., gender, plurality, and co-occurrence frequency) do not have any effect.", "Human beings can solve these questions because of their shared commonsense knowledge.", "For example, ordinary people know that the pronoun 'it' in the first sentence refers to 'fish' while the one in the second sentence refers to 'worm', because 'hungry' is a common property of something eating things while 'tasty' is a common property of something being eaten.", "Conventionally, people tried to leverage crowd-sourced commonsense knowledge bases (Liu et al., 2017) or search engines (Emami et al., 2018) to solve the WSC task, but the performance of these models is not satisfying.", "Recently, pre-trained language representation models (Kocijan et al., 2019; Radford et al., 2019; Liu et al., 2019) have demonstrated significant improvements in both unsupervised and supervised settings.", "However, as these approaches treat the concept 'commonsense knowledge' as a black box, we are not [Figure 2: One example from the WinoWhy dataset.]", "clear about why they can do better (e.g., can these models understand commonsense, or do they just capture the statistical bias of the dataset) and do not know how to further improve them.", "To answer these two questions, in this work, we present the first deep diagnosis of essential commonsense knowledge for answering WSC questions.", "Specifically, we invite annotators to first provide reasons for why they choose the answers when they answer the questions, and then group all the WSC questions by different types of used commonsense knowledge (e.g., the property of entities, temporal knowledge, or spatial knowledge).", "By doing so, we can then analyze what kinds of commonsense knowledge can be well represented and understood by current models and, more importantly, we can be clear about what kinds of commonsense knowledge are still challenging for current models, which could be an important future research direction for solving not only the WSC task but also the general commonsense reasoning problem.", "After the diagnosis, based on the collected reasons, we also create a new task, WinoWhy, which aims at better evaluating models' abilities to understand commonsense knowledge.", "For each question in the WSC task, we pair it with several reasons.", "Models are required to distinguish the correct reasons from all very similar but wrong candidates.", "From the examples in Figure 2, we can see that even though all candidates are highly related to the original question, only one of them is the correct reason for resolving the coreference relation.", "Experimental results show that even though state-of-the-art models can achieve about 90% accuracy on the original WSC task, they are still struggling on WinoWhy questions, which shows that current models are still far away from understanding commonsense knowledge.", "Moreover, by conducting experiments on both the WSC and WinoWhy tasks, we prove that even though supervised models can achieve better performance, these models can be sensitive to the dataset distribution, which indicates that the improvement probably comes from better capturing the statistical bias of the dataset rather than better understanding the required commonsense knowledge.", "The rest of the paper is organized as follows.", "In Section 2, we present the diagnosis of essential commonsense knowledge for answering WSC questions, which includes the reason collection and categorization.", "After that, we show how we
create WinoWhy in Section 3. In Sections 4 and 5, we introduce the detailed experiments and analysis on both the original WSC and the proposed WinoWhy tasks.", "We introduce the related work about commonsense reasoning in Section 6. In the end, we conclude this paper with Section 7. 2 Commonsense Knowledge Diagnosis Commonsense reasoning is often viewed as one of the most challenging AI tasks, and we still do not have a principled way of solving it.", "One important reason behind this is that, due to the vague definition of commonsense knowledge, we are not clear about what the essential knowledge types are, and thus we are unclear about how to represent, acquire, and use them.", "As a result, we can only treat commonsense knowledge as a black box and try to learn it from limited training data.", "To explore a principled way of representing commonsense knowledge and solving commonsense reasoning problems, we take the Winograd Schema Challenge as the breaking point to conduct a detailed diagnosis of what kinds of knowledge are essential for answering these questions.", "To be specific, we first ask human beings to provide reasons why they make the correct decisions for all WSC questions.", "After that, we categorize these reasons by the involved knowledge types (e.g., the property of objects, temporal knowledge, or spatial knowledge).", "By doing so, we are clearer about how to acquire, represent, and apply such knowledge.", "Details are introduced as follows.", "To collect high-quality reasons for answering all WSC questions, we employ the Amazon Mechanical Turk (MTurk) platform for our annotations and", "design a two-phase annotation procedure to collect the knowledge.", "In the first phase, we ask annotators to provide reasons for all WSC questions.", "Detailed instructions are provided such that annotators can fully understand the task.", "As each question may have multiple plausible reasons, for each question, we invite five annotators to provide reasons based on their own judgments.", "A screenshot of the survey is shown in Figure 3. As a result, we collect 1,365 reasons.", "As the quality of some given reasons might not be satisfying, we introduce a second round of annotation to evaluate the quality of the collected reasons.", "In the second phase, for each reason, we invite five annotators to verify whether they think the reason is reasonable or not.", "If at least four annotators think the reason is plausible, we will accept that reason.", "As a result, we identify 992 valid reasons.", "After collecting all reasons, we categorize them into different groups based on the used knowledge types.", "We first introduce the selected knowledge types and then introduce the detailed annotation procedure.", "A good categorization standard should have two properties: (1) Broad Coverage: it should cover most cases; (2) Exclusive: there should be clear boundaries between different categories.", "Following these standards, we found the following two categorization methods of commonsense knowledge: 1.
Conceptual Semantic Theory: According to Jackendoff's original theory (Jackendoff, 1990), the semantics of human language can be expressed with a finite set of mental primitives and a finite set of principles of mental combination.", "As claimed by Jackendoff, even though the definition of mental primitives may vary based on different data or languages, some common primitives (i.e., entity, property, number, location, state, event, and activity) can be observed.", "These common primitives can thus be used as knowledge types for the commonsense knowledge categorization.", "2. ConceptNet: As one of the most popular commonsense knowledge resources, ConceptNet 1.0 (Liu and Singh, 2004) defines 20 commonsense relations, which belong to eight categories (i.e., K-lines, Things, Agents, Events, Spatial, Causal, Functional, and Affective).", "In the latest version of ConceptNet (Speer et al., 2017), more relations (e.g., 'RelatedTo') from other resources are merged into ConceptNet.", "As they are relatively vague, we still follow the definition in ConceptNet 1.0 for the commonsense knowledge categorization.", "As there exist some overlaps between semantic primitives and categories in ConceptNet (e.g., 'Agents' and 'Functional' both describe certain properties of some objects), we first adopt all the commonly observed primitives in (Jackendoff, 1990) as the base knowledge types and then modify them based on the definition of categories from ConceptNet.", "For example, three primitives (activity, state, and event) and Events from ConceptNet can all be covered by the definition of Eventuality (P. D. Mourelatos, 1978).", "For the simplicity of the categorization and the quality of the annotation, we merge them.", "At the current stage, we remove 'K-lines' because it contains relations like 'ConceptuallyRelatedTo', which is relatively vague and difficult to distinguish from other categories.", "Another exceptional knowledge type is 'Causal' from ConceptNet.", "During the annotation, we found out that annotators had difficulty understanding the strict definition of 'Causal' in ConceptNet (i.e., one event contributes to the creation of another one) and tended to annotate all reasons as 'Causal' because they think all reasons can somehow 'cause' the decision making.", "To make sure that all categories are easy for annotators, who are mostly ordinary people, to distin- [Table 1 (excerpt): Name / Definition / Example. Property: knowledge about the property of objects.]", "guish, we remove 'Causal'.", "As we cannot guarantee that the selected knowledge types could cover all kinds of knowledge, an additional type 'Others' is provided.", "Names, definitions, and examples of the selected knowledge types are shown in Table 1.
2.2.2 Annotation For each collected valid reason, we invite annotators to select the knowledge type that can best describe the reason.", "Note that each reason may contain inference over multiple knowledge types.", "Thus, for each reason, we invite five different annotators to provide annotations.", "Each annotator is provided with detailed instructions for the job, descriptions of each candidate category, and examples for the category.", "As a result, we collect 4,960 annotations.", "We show the distribution of annotation results in Figure 4.", "From the distribution, we can see that all knowledge types are very important, especially the knowledge about objects (e.g., 'cats have ears') and eventualities (e.g., 'people who give help often receive thanks later').", "Besides that, we also notice that only 17% of all reason annotations (839) are 'Others', which indicates that the selected five categories can effectively cover 83% of the cases and thus the selected knowledge types fulfill the broad coverage requirement.", "We evaluate the annotation quality by average inter-annotator agreement (IAA) and the kappa coefficient (McHugh, 2012).", "We compute the IAA pairwise among all annotators.", "For each reason, if two annotators give the same knowledge type, we label it as agreed; otherwise, we label it as disagreed.", "The average IAA is 78.72%.", "We calculate the kappa coefficient based [Figure 4: Distribution of different knowledge types.]", "on the five raters and five categories setting, and the result is 0.804.", "Considering that the annotation task is a multiple-choice task, such an agreement can indicate that the survey is well designed and annotators can clearly understand the task.", "For each WSC question, we select the most popular knowledge type among all valid reasons as the question's major knowledge type.", "If multiple knowledge types have the same votes, we assume that question has multiple knowledge types.", "As a result, 222 questions have a single knowledge type and 51 questions have multiple knowledge types.", "Each question in WinoWhy is defined as follows.", "Given a pronoun coreference resolution question and its correct answer from the original WSC data, models are asked to select all plausible reasons for making the correct prediction.", "WinoWhy can thus be viewed as a natural follow-up of the original WSC task and can help better understand models' commonsense reasoning abilities.", "For each question, three kinds of candidate reasons are selected for annotators to annotate.", "The first reason resource is human annotation, which effectively represents how human beings solve these questions.", "Besides that, to collect some very similar but wrong reasons as negative examples, we consider the reasons provided by humans for the reverse question as a potential challenging wrong reason resource.", "Last but not least, besides reasons provided by human beings, we also lever- [Figure 5: Distribution of reason plausibility scores.]", "age a strong generation model (i.e., GPT-2 (Radford et al., 2019)) to generate reasons.", "We provide the same questions that we showed to humans before (e.g., 'The fish ate the worm. It was hungry.
It refers to fish because') to the generation model and ask it to finish the sentences.", "For each question, we use beam search to find the top five generated reasons.", "Merging all resources, we get 4,095 reasons for the next step of annotation.", "Similar to the previous annotations, we invite annotators from Amazon Mechanical Turk to help annotate whether the reasons are plausible or not.", "For each reason, we invite five different annotators and determine the plausibility score of each reason by voting.", "For example, if four out of the five annotators think one reason is plausible, its plausibility score is then 0.8.", "We use the same survey to annotate the plausibility of different reasons as in Section 2.1.", "As a result, we collect 20,475 annotations.", "The average IAA is 91.49% and the kappa coefficient (five raters and two categories) is 0.880.", "We show the distribution of annotation results in Figure 5, from which we can make the following observations.", "First, most of the reasons given by humans are reasonable, which fits our previous observation.", "Second, even though the majority of reverse reasons are not plausible, which fits our assumption, some of them do make sense.", "One scenario is that when the reason is comparing some property of both candidates, it can be used for both questions.", "For example, for the question pair The trophy doesn't fit into the brown suit- [Table 2 (excerpt): Reason / Plausibility Score. 'of the circumstances of his birth. -C.B.' 0.0/1.0; 'he's the one who's given him the money to do so.']", "case because it is too small/large, explanations like 'Only small objects can fit into large objects' are plausible for both questions.", "Last but not least, not surprisingly, most of the reasons generated by GPT-2 have relatively low quality.", "To analyze why the reasons generated by GPT-2 are not satisfying, we show one example in Table 2. Based on the five reasons, we can find two limitations of GPT-2: (1) it could generate some meaningless words (e.g., '-C.B.'), which could influence the overall quality significantly; (2) some of the answers are related and complete sentences by themselves, but they are not valid reasons for the question.", "For example, the second reason is wrong because Charlie cannot be the one who has given the money.", "These observations show that understanding commonsense knowledge is still a challenging task for current pre-trained language representation models like GPT-2.", "If at least four out of five annotators regard one reason as plausible, we label it as a positive reason.", "If only one or zero annotators think it is plausible, we label it as a negative reason.", "All others are labeled as acceptable reasons.", "To ensure a clear boundary between positive and negative examples in WinoWhy, only positive and negative reasons are selected to evaluate models.", "In total, WinoWhy contains 1,270 positive and 1,595 negative examples.", "In this section, we present the performance of current models on WSC.", "By doing so, we can better understand their strengths and limitations.", "Recently, pre-trained language representation models have achieved significant improvement on the WSC task.", "In this section, we evaluate the following three models: 1.
BERT (Devlin et al., 2019): As a powerful contextualized word representation model, it has been proven helpful in many downstream NLP tasks.", "As shown in (Kocijan et al., 2019), we can first convert the original WSC task into a token prediction task and then leverage BERT to solve the problem.", "We denote the base and large models of BERT as BERT (base) and BERT (large) respectively.", "2. GPT-2 (Radford et al., 2019): GPT-2 is one of the best pre-trained language models for generation tasks.", "As reported in the original paper, we can first replace the pronouns with different candidates and leverage the probability of the full or partial sentences to make the prediction.", "Here we evaluate the small (117M parameters) and the large (774M parameters) models and denote those settings as GPT-2 (small, full), GPT-2 (small, partial), GPT-2 (large, full), and GPT-2 (large, partial) respectively.", "3. RoBERTa (Liu et al., 2019): RoBERTa is a recent improved version of BERT with a larger amount of training instances and techniques such as dynamic masking, which performs consistently better than BERT over many benchmark datasets.", "We denote the base and large models of RoBERTa as RoBERTa (base) and RoBERTa (large) respectively.", "Besides unsupervised models, as indicated by (Kocijan et al., 2019), fine-tuning BERT with a similar pronoun resolution dataset, WSCR (Rahman and Ng, 2012), can help boost the performance.", "A later work (Sakaguchi et al., 2019) has further enhanced the performance by fine-tuning RoBERTa with a larger and more balanced dataset, WinoGrande.", "Statistics of these datasets are presented in Table 3. In our experiments, we evaluate the combination of different pre-trained models and fine-tuning datasets, and denote them as BERT (base/large) + WSCR/Grande and RoBERTa (base/large) + WSCR/Grande respectively.", "From the results in Table 4, we can make the following observations: (1) Larger models perform better on all knowledge types due to their stronger semantic representation abilities; (2) The partial version of GPT-2 significantly outperforms the full version, which is consistent with the observation in (Trinh and Le, 2018) and is mainly because the influence of the imbalanced distribution of candidate words is relieved by only considering the sentence probability after the pronouns.", "This observation also explains why GPT-2 can outperform unsupervised BERT on WSC, because models based on BERT, which rely on predicting the probability of candidate words, cannot get rid of such noise; (3) For most models, questions that require spatial knowledge are the most challenging ones.", "One possible explanation is that the inference over spatial knowledge is often triggered by a preposition (e.g., 'in' or 'behind'), which is challenging for language representation models to remember without enough training corpus for spatial knowledge specifically; (4) Questions belonging to 'Others' involve more complex inference, even over multiple types of knowledge, and thus most models perform poorly on them.", "The only exception is RoBERTa, which leverages its strong language representation ability to overcome such a challenge; (5) Fine-tuning over WinoGrande significantly boosts the performance.", "Besides the above analysis, we are also interested in how different models perform on questions that require complex reasoning types.", "Thus we divide all WSC questions based on how many knowledge types are required to solve these questions and show the result in Table 5.", "Based on the result, we
can see that relatively small models (e.g., BERT (base) and RoBERTa (base)) perform better on questions that require a single knowledge type rather than multiple knowledge types.", "However, for large models (e.g., BERT (large) and RoBERTa (large)), as long as a suitable fine-tuning dataset is provided, they can achieve [Table 4: Performance of different models on WSC questions, by knowledge type (number of questions in parentheses).
Model | Property (32) | Object (82) | Eventuality (88) | Spatial (64) | Quantity (20) | Others (48) | Overall (273)
BERT (base) | 56.25% | 64.63% | 50.00% | 57.81% | 50.00% | 45.83% | 56.04%
BERT (large) | 56.25% | 62.20% | 62.50% | 67.19% | 45.00% | 52.08% | 61.90%
RoBERTa (base) | 43.75% | 51.22% | 56.82% | 51.56% | 55.00% | 39.58% | 51.65%
RoBERTa (large) | 50.00% | 51.22% | 52.27% | 48.44% | 65.00% | 56.25% | 52.75%
GPT-2 (small, full) | 56.25% | 51.22% | 55.68% | 51.56% | 60.00% | 47.92% | 52.75%
GPT-2 (small, partial) | 43.75% | 60.98% | 53.41% | 51.56% | 60.00% | 54.17% | 53.48%
GPT-2 (large, full) | 68.75% | 68.29% | 61.36% | 53.13% | 55.00% | 45.83% | 59.34%
GPT-2 (large, partial) | 65.63% | 75.61% | 72.73% | 62.50% | 65.00% | 60.42% | 69.23%
BERT (base) + WSCR | 71.88% | 64.63% | 55.68% | 59.38% | 65.00% | 45.83% | 59.71%
BERT (large) + WSCR | 81.25% | 75.61% | 73.86% | 67.19% | 85.00% | 64.58% | 71.43%
BERT (base) + Grande | 65.63% | 58.54% | 60.23% | 59.38% | 55.00% | 56.25% | 60.34%
BERT (large) + Grande | 75.00% | 70.73% | 77.27% | 79.69% | 75.00% | 68.75% | 73.63%
RoBERTa (base) + WSCR | 62.50% | 60.98% | 57.95% | 64.06% | 55.00% | 64.58% | 63.00%
RoBERTa (large) + WSCR | 84.38% | 84.15% | 79.55% | 76.56% | 70.00% | 81.25% | 80.95%
RoBERTa (base) + Grande | 75.00% | 67.07% | 72.73% | 75.00% | 80.00% | 70.83% | 72.16%
RoBERTa (large) + Grande | 90.63% | 84.15% | 93.18% | 84.38% | 90.00% | 89.58% | 87.55%]", "similar and even better performance on the complicated questions.", "In general, this observation is consistent with our previous observations that large models are capable of solving complex questions from the 'Others' category with the support of suitable fine-tuning datasets.", "In this section, we conduct experiments to investigate whether current models can understand how human beings solve WSC questions.", "Experiment Details: To evaluate whether pre-trained language representation models, which achieve the state-of-the-art performance on the WSC task, can distinguish the plausible reasons from the wrong ones, following (Kocijan et al., 2019; Radford et al., 2019; Sakaguchi et al., 2019), we first connect the questions and candidate reasons into single sentences, put them into the models, and take the returned probability as the prediction.", "A higher probability indicates a higher plausibility prediction.", "The best thresholds are selected for different models to calculate the final accuracy.", "Similar to Section 4, we evaluate BERT (base), BERT (large), GPT-2 (small), GPT-2 (large), RoBERTa (base), and RoBERTa (large) on WinoWhy.", "For GPT-2 models, as the partial setting has proved more useful, we only report the performances based on the partial setting.", "Besides these two, we also consider BERT/RoBERTa + WSCR/Grande combinations as additional unsupervised approaches because they are not directly optimized towards the WinoWhy task.", "Result Analysis: Based on the results shown in Table 6, we can observe that even though pre-trained language representation models have achieved significant improvement over the original WSC task, they are still struggling on the WinoWhy task.", "Moreover, experimental results on different knowledge types prove that such a conclusion is universal rather than for a specific
[Table 6: Performance of different models on WinoWhy questions, by knowledge type (number of questions in parentheses).
Model | Property (337) | Object (856) | Eventuality (928) | Spatial (674) | Quantity (206) | Others (496) | Overall (2865)
Majority Voting | 54.30% | 56.31% | 56.47% | 52.67% | 52.43% | 55.24% | 55.67%
BERT (base) | 56.97% | 56.54% | 56.25% | 54.01% | 51.94% | 55.44% | 55.92%
BERT (large) | 56.38% | 57.24% | 56.14% | 53.41% | 51.94% | 56.65% | 56.13%
RoBERTa (base) | 54.30% | 56.31% | 56.90% | 52.67% | 52.91% | 55.44% | 55.78%
RoBERTa (large) | 54.30% | 56.43% | 56.47% | 52.67% | 52.43% | 55.04% | 55.67%
GPT-2 (small) | 56.68% | 54.91% | 57.11% | 54.45% | 59.71% | 57.66% | 56.37%
GPT-2 (large) | 57.57% | 54.44% | 54.42% | 55.93% | 54.85% | 54.84% | 55.77%
BERT (base) + WSCR | 55.49% | 56.31% | 56.90% | 52.97% | 51.94% | 55.04% | 55.71%
BERT (large) + WSCR | 56.97% | 56.31% | 56.79% | 53.12% | 52.91% | 55.04% | 55.99%
BERT (base) + Grande | 57.27% | 56.43% | 57.22% | 53.41% | 52.91% | 55.24% | 55.99%
BERT (large) + Grande | 54.90% | 56.07% | 56.57% | 52.67% | 52.91% | 55.44% | 55.71%
RoBERTa (base) + WSCR | 52.82% | 55.61% | 58.41% | 53.26% | 56.31% | 55.04% | 56.19%
RoBERTa (large) + WSCR | 54.90% | 58.06% | 56.90% | 52.08% | 52.91% | 56.85% | 56.23%
RoBERTa (base) + Grande | 56.08% | 58.88% | 58.19% | 55.64% | 57.28% | 57.66% | 58.05%
RoBERTa (large) + Grande | 56.08% | 58.06% | 59.59% | 56.82% | 56.80% | 58.06% | 58.18%]", "kind of knowledge.", "One possible reason is that even though the designers of WSC are trying to avoid any statistical correlation between the answer and the trigger word, such statistical correlation still exists.", "As a result, pre-trained language representation models can learn such correlation from large-scale training corpus and thus can answer WSC questions without fully understanding the reasons behind.", "Besides that, another interesting finding is that GPT-2 (large), as the best unsupervised model on WSC, performs poorly on WinoWhy.", "One possible explanation is that a lot of negative examples are generated with GPT-2 (large), and thus the dataset brings extra challenges for GPT-2 (large).", "Last but not least, we can find that fine-tuning over similar datasets (i.e., WSCR and WinoGrande) can slightly help RoBERTa, but the effect is still quite limited.", "This is probably because such a fine-tuning procedure only teaches pre-trained models to better answer WSC questions rather than understand the commonsense knowledge behind.", "Besides the unsupervised setting, we are also interested in whether a model can learn to distinguish reasons through supervised learning.", "Here, we randomly divide the annotated dataset into five groups and conduct five-fold cross-validation.", "We tried two different splitting meth- [Table 7: Accuracy and standard deviation by setting and model.]", "ods, one is based on the WSC questions and the other one is based on the reasons.", "We denote these two settings as Five-fold", "(q) and Five-fold", "(r) respectively.", "As WinoWhy can be viewed as a text classification task, we adopt the traditional encoding + classification framework and leverage a two-layer feed-forward neural network as the classification module.", "Seven different encoding methods (Bi-LSTM (Hochreiter and Schmidhuber, 1997), BERT (base), BERT (large), GPT-2 (small), GPT-2 (large), RoBERTa (base), and RoBERTa (large)) are evaluated.", "For the LSTM, we choose the number of layers to be two, the hidden embedding dimension to be 300, and GloVe (Pennington et al., 2014) to be the word embedding.", "All models are trained for ten epochs.", "Average accuracies over folds and standard deviations are reported.", "The results in Table 7 demonstrate that in general, WinoWhy is a challenging task as the best supervised model can only achieve 77.77% accuracy on a two-class classification task.", "Besides that, we also notice that all models are getting relatively
large standard deviations, especially under the 'Five-fold", "(r)' setting, which may imply that these supervised models are sensitive to the dataset distribution.", "Both of these observations show that training a supervised model on WinoWhy is not enough to fully understand the reasons behind WSC decisions, and we may need to include reasoning over more complex knowledge to solve this challenging problem.", "Based on the observations that fine-tuning over WSCR and WinoGrande can only help solve WSC rather than WinoWhy, and that machine-learning based models over WinoWhy can be sensitive to the dataset distribution, it is reasonable to suspect that the improvement achieved by fine-tuning over a similar or the same dataset might come from better dataset fitting rather than better commonsense reasoning.", "As the original purpose of proposing both WSC and WinoWhy is to evaluate how well current AI systems can understand commonsense knowledge rather than solve these questions by fitting the dataset, the unsupervised setting might be the more reasonable evaluation setting.", "As an important knowledge resource for many artificial intelligence systems, commonsense knowledge covers various knowledge categories like causality (Sap et al., 2019), reasoning (Schubert, 2015), property (Liu and Singh, 2004), and quantity (Elazar et al., 2019), and has been proven crucial in many downstream tasks like question answering (Lin et al., 2019), dialogue systems (Zhou et al., 2018), reading comprehension (Wang et al., 2018), and pronoun coreference resolution (Levesque et al., 2012).", "Among all these tasks, the Winograd Schema Challenge (WSC) (Levesque et al., 2012) is viewed as one of the most challenging ones because solving WSC questions typically requires inference over various kinds of commonsense knowledge.", "Conventionally, people tried to solve WSC questions in an unsupervised way by leveraging either search engines (Emami et al., 2018), linguistic knowledge (Zhang et al., 2019, 2020), or language representation models (Kocijan et al., 2019).", "Experimental results showed that these models still cannot fully solve the problem, but we are not clear about how to further improve them.", "One important reason behind this is that the conventional definition of commonsense knowledge is too vague, and thus we are not clear about what kinds of knowledge are still challenging for current commonsense reasoning models.", "In this paper, we use the WSC task as the breaking point to conduct a deep diagnosis of essential commonsense knowledge types, which sheds some light on how to achieve a better commonsense reasoning system in the future.", "In this paper, we presented the first deep diagnosis of essential commonsense knowledge for answering Winograd Schema Challenge questions.", "By doing so, we better understand the strengths and limitations of current commonsense reasoning models.", "More importantly, we better know what kinds of commonsense knowledge we need to acquire for better commonsense reasoning.", "On top of the collected reasons, we develop a new task called WinoWhy, which requires models to select the plausible reasons for answering WSC questions.", "Experiments show that even though current models have gained significant improvement over the original WSC task, they still cannot fully understand the reasons behind.", "This paper was supported by the Early Career Scheme (ECS, No.
26206717) from the Research Grants Council in Hong Kong and the Tencent AI Lab Rhino-Bird Focused Research Program." ]
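The "partial" GPT-2 scoring used in the WSC evaluation described in the record above (replace the pronoun with each candidate, then score only the tokens after the substitution point) can be sketched with Hugging Face transformers. This is an illustration, not the papers' exact preprocessing, and it assumes the candidate/suffix tokenization splits cleanly at the substitution point.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def partial_score(prefix, candidate, suffix):
    """Sum of log-probabilities of the suffix tokens only (the 'partial' setting)."""
    ids = tokenizer.encode(prefix + candidate + suffix, return_tensors="pt")
    n_context = len(tokenizer.encode(prefix + candidate))  # tokens before the suffix
    with torch.no_grad():
        logits = model(ids).logits[0, :-1]  # position t predicts token t+1
    log_probs = torch.log_softmax(logits, dim=-1)
    targets = ids[0, 1:]
    token_scores = log_probs[torch.arange(len(targets)), targets]
    return token_scores[n_context - 1:].sum().item()  # keep only suffix tokens

# Resolve "It" in "The fish ate the worm. It was hungry."
context, suffix = "The fish ate the worm. ", " was hungry."
candidates = ["The fish", "The worm"]
print(max(candidates, key=lambda c: partial_score(context, c, suffix)))
```

The full-sentence variant would instead sum over all token positions; as noted in the analysis above, restricting to the suffix reduces the effect of the candidates' imbalanced word frequencies.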
[ "objective", "objective", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "objective", "objective", "method", "result", "objective", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "other" ]
[ "Spotify [email protected]", "Spotify [email protected]", "Abstract", "While there is an abundance of popular writing targeted to podcast creators on how to speak in ways that engage their listeners, there has been little data-driven analysis of podcasts that relates linguistic style with listener engagement.", "In this paper, we investigate how various factors vocabulary diversity, distinctiveness, emotion, and syntax, among others correlate with engagement, based on analysis of the creators' written descriptions and transcripts of the audio.", "We build models with different textual representations, and show that the identified features are highly predictive of engagement.", "Our analysis tests popular wisdom about stylistic elements in high-engagement podcasts, corroborating some aspects, and adding new perspectives on others.", "What makes a particular podcast broadly engaging?", "As a media form, podcasting is new enough that such questions are only beginning to be understood (Jones et al., 2021).", "Websites exist with advice on podcast production, including language-related tips such as reducing filler words and disfluencies, or incorporating emotion, but there has been little quantitative research into how aspects of language usage contribute to overall listener engagement.", "This paper investigates the linguistic factors that correlate with engagement, leveraging the written descriptions of the parent show and episode as well as the transcript of the audio.", "Our metric of engagement is stream rate , which we define as the proportion of first-time listeners of those who have begun streaming the episode who listen for at least five minutes.", "Notably, stream rate is different from the metric of popularity as given by the raw number of streams; the latter is inevitably influenced by factors unrelated to the content, such as the host or publisher reputation, publicity, exposure in recommendations and search engines, and time of publication, whereas a listener's decision to continue listening for as long as five minutes is likely to be influenced by the content.", "We perform a series of descriptive tests to examine differences in language usage between high and low engagement podcasts, and build predictive models.", "Our tests show that while much of the conventional wisdom on engaging podcasting style (such as to use positive language) bears out in the data, other assumptions (such as to speak slowly) are contradicted and deserve a closer look.", "We find that stylistic features tend to be more correlated with engagement for podcasts with low absolute numbers of streams than for the most popular podcasts, suggesting that listeners may be less sensitive to style in podcasts made by well-known creators.", "We also identify those linguistic factors that correlate with our engagement metric across the popularity spectrum, and those that are limited to podcasts within a certain popularity range.", "Our predictive models prove that stylistic factors alone play a significant role in determining if a podcast has high or low engagement, achieving an accuracy of 72% in distinguishing between very high engagement (top 25% of podcasts by stream rate in the corpus) and very low engagement (bot-tom 25% ) examples.", "We also show that the overall textual information in podcasts is highly predictive of engagement in this experiment, with an accuracy as high as 81% .", "To understand how style in podcasts compares to other spoken media, we apply our analysis to a corpus of TED talks.", 
"Finally, we manually examine the highest engagement podcasts in our dataset to characterize their content.", "Content-Based Podcast Recommendations Yang et al. (2019) model transcripts with a topic", "model, and the audio with a representation they trained to predict the non-textual attributes of seriousness and energy.", "They find that combining these representations improves over the purely topic based model on popularity prediction.", "This work indicates that stylistic attributes are important factors, and raises the question of whether stylistic features derived from the text are valuable as well.", "Tsagkias et al. (2010) develop a framework containing a set of attributes, and compare the proportions of these attributes relative to engagement on iTunes.", "Our work follows a similar spirit, but we address some limitations of their study, namely, they use a small set of podcasts ( 250 ), and manually annotate the attributes for every podcast rather than deriving them from the raw data.", "Since we derive all features automatically, we limit ourselves to concrete, easily quantifiable features, whereas the above paper considers higher level attributes like one topic per episode' or fluent'.", "Predicting Performance from Language Previous research in natural language processing has explored the connections between textual features and audience engagement in books (Ganji-gunte Ashok et al., 2013; Maharjan et al., 2018), YouTube (Kleinberg et al., 2018), news (Naseri and Zamani, 2019), TED talks (Tanveer et al., 2018), and tweets (Tan et al., 2014; Lampos et al., 2014).", "Other works have modeled the relationship between text and various performance metrics such as movie quote memorability (Danescu-Niculescu-Mizil et al., 2012), forecasting ability (Zong et al., 2020), congressional bill survival (Yano et al., 2012), success of job interviews (Naim et al., 2016), and impact of academic papers (Yo-gatama et al., 2011; Li et al., 2019), in addition to the entire field of sentiment and opinion mining of data such as user reviews (Pang et al., 2002).", "The Spotify Podcast Dataset (Clifton et al., 2020; Jones et al., 2020) is a recently released corpus of over 100 , 000 podcast episodes, mostly in English, that are transcribed with Google's Speech to Text commercial speech recognition, reported in the paper to have an 18% word error on podcasts.", "A podcast, also known as a show' in the dataset, is a collection of episodes.", "In addition to the speech transcripts, the textual information associated with each podcast episode includes the title and description of the episode and the parent show (Table 1).", "In this paper, we consider descriptions and transcripts as the text representation of an episode.", "All textual data was normalized and part-of-speech tagged with spacy.", "1 3.1 Ads and promotions Since many episode descriptions contain promotions, advertisements, and show notes, which are extraneous to the main content of the podcast, we remove such material before analysis (although we also measure the amount of ad content as a feature).", "2 Promotional and extraneous material was detected by the classifier described by Reddy et al. 
(2021), a model using BERT with a classification head, trained on a manually annotated set of episode descriptions.", "This classifier is reported to have a sentence classification accuracy of 95% on episode descriptions.", "(For part-of-speech tagging we used spacy.io (Honnibal et al., 2020), with the large English web model, en_core_web_lg v.2.3.1.)", "(Initial experiments showed weaker effects of stylistic features on engagement when such extraneous content was included in the analysis.)", "Table 1 example. Show title: Witch Wednesdays. Show description: A weekly podcast covering all things witchcraft in the modern world. Join us, two best friends and Midwestern witches (one Wiccan, one not), as we dive into all things witchy. We're starting at the beginning, making this podcast a great resource for newbies...", "Episode title: Episode 1 What You're In For This Year. Episode description: Happy New Year! Welcome to Witch Wednesdays! Join us every Wednesday morning for all things witch and witchcraft. In this first episode, we're introducing ourselves and this podcast so you can get an idea about what you're getting yourself into this year...", "Automatic transcript: You're listening to which Wednesday's your weekly podcast source for all things witchcraft in the modern world. Join your host Stephen Tara every Wednesday morning that they dive into a new Wiki topic. Hello and welcome to the very first episode of which Wednesdays. I'm Steph and I'm Terrell and together will be co-hosting this podcast Adventure this year rather than...", "We obtained streaming numbers for the episodes in the corpus from Spotify, a music and podcast streaming platform.", "The numbers were aggregated from the date of the episode's publication on the platform until December 2020.", "Since the most recently published episode in the dataset is from February 2020, all episodes had several months of exposure by the time of collection.", "We specifically consider streaming by 'first-time listeners' who are not already familiar with the show, i.e., those who have not previously streamed any other episode of that show for more than five minutes.", "Listeners who are familiar with the show through other episodes are ignored since they may be habituated and primed for the content.", "As described in the introduction, we use stream rate as the engagement metric, defined as the proportion of the show's first-time listeners who stream at least five minutes of the episode.", "Stream rate in the dataset shows a weak but statistically significant inverse rank correlation with popularity (Spearman's ρ = -0.12, p < 0.001).", "This may be because popular podcasts attract more listeners who may realize they are not interested in the content soon after they begin streaming, while the listeners of less popular podcasts may have actively sought them out.", "A 70% stream rate in a well-known podcast, which would have attracted a broad array of listeners, is not comparable to a 70% stream rate in a relatively unknown podcast.", "Therefore, we bin the dataset into popularity quartiles for analysis on stream rate, which is found to be uncorrelated with popularity within each quartile.", "Stream rate is uncorrelated with the time of publication.", "We filter out all episodes that are shorter than ten minutes or that have fewer than a threshold number of total streams.", "To control for duration effects in the analysis of transcripts, we truncate transcripts at ten minutes.", "The original podcast corpus contains multiple episodes for many of the shows, while other shows have only one episode.",
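The popularity/engagement relationship and the quartile binning described above can be illustrated as follows; the input columns (`streams`, `stream_rate`) are assumed names, and ranking before `pd.qcut` is a detail added here to avoid duplicate bin edges:

```python
import pandas as pd
from scipy.stats import spearmanr

def popularity_analysis(episodes: pd.DataFrame):
    """Rank correlation between popularity and stream rate, overall
    and within each popularity quartile (quartile 1 = most popular)."""
    rho, p = spearmanr(episodes["streams"], episodes["stream_rate"])
    ranks = episodes["streams"].rank(method="first")
    episodes = episodes.assign(quartile=pd.qcut(ranks, 4, labels=[4, 3, 2, 1]))
    within = {}
    for q, group in episodes.groupby("quartile", observed=True):
        within[q], _ = spearmanr(group["streams"], group["stream_rate"])
    return rho, p, within
```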
episode.", "We select the most-streamed episode from each show as its representative, thereby ensuring that every show is represented by a single episode in the data.", "This is done so that shows with several episodes do not have an outsize influence on the models.", "Since the original corpus is an English-language collection, all of our analysis is constrained to English, and we filter out any stray examples in the corpus that are detected as non-English after running language identification (Lui and Baldwin, 2011) on the descriptions.", "The resulting dataset has 5371 episodes.", "The norms of language usage may vary depending on the genre and topics being discussed.", "For example, technical podcasts are expected to contain more complex language compared to chit-chat, crime podcasts to contain words with negative sentiments as opposed to motivational podcasts, and so on.", "The RSS feed of a podcast show contains one or more categories selected by the creators from the Apple iTunes taxonomy; however, these are unreliable, since many of the categories are ambiguous or ill-defined, (e.g. Leisure' which mainly includes gaming podcasts but also general leisure topics, Kids & Family' which includes podcasts Genre Words in Topic mystery door, eye, room, hand, head, night, face, away, looked music song, music, album, artist, listen, love, record, hip, hop investing market, company, stock, investment, investor, trade working out training, gym, fitness, coach, workout, muscle, body entertainment jacob, alice, edward, vampire, max, bella, hamilton, john ad free, episode, app, download, podcasts, listen, place culture world, sort, idea, human, interesting, sense, fact, society education school, student, class, teacher, college, high, kid, grade gaming game, play, playing, new, nintendo, stuff, played, switch food food, eat, coffee, drink, chicken, restaurant, beer, taste tv episode, character, show, scene, season, end, point harry potter harry, mr, potter, charlie, ron, fred, hermione, professor career job, company, team, working, career, industry, experience sports world, team, australia, cup, final, club, week, player biology cell, dna, bond, virus, genetic crime murder, police, crime, case, found, death, killer language word, language, english, spanish, use, learn, speak astronomy space, science, earth, planet, light, solar, scientist, star fillers 1 yeah, oh, okay, yes, exactly, gonna, feel, guess, sure, cool fillers 2 feel, stuff, still, never, went, remember, thought, whatever effusiveness love, great, thank, different, amazing, bit, awesome Table 2: Some examples of LDA topics. The genre labels are manually assigned only to aid interpretation. 
for kids as well as about parenting), and podcast creators may not always select the most appropriate categories (Sharpe, 2020).", "Furthermore, podcasts span multiple themes and structures, making the assignment of one or two categories per podcast too restrictive.", "Instead, we fit an LDA topic model (Blei et al., 2003) with 100 topics to transcripts of the entire 100k podcast corpus as in previous works (Clifton et al., 2020; Yang et al., 2019), represent each episode by the topic distribution, and measure topic proportions relative to the target metrics in order to contextualize our results on stylistic features.", "(The number of topics is selected by optimizing for topic coherence as implemented by the coherence model in the Gensim toolkit (Řehůřek and Sojka, 2010).)", "Table 2 shows a sample of the inferred topics.", "Table 2 (some examples of LDA topics; the genre labels are manually assigned only to aid interpretation). mystery: door, eye, room, hand, head, night, face, away, looked; music: song, music, album, artist, listen, love, record, hip, hop; investing: market, company, stock, investment, investor, trade; working out: training, gym, fitness, coach, workout, muscle, body; entertainment: jacob, alice, edward, vampire, max, bella, hamilton, john; ad: free, episode, app, download, podcasts, listen, place; culture: world, sort, idea, human, interesting, sense, fact, society; education: school, student, class, teacher, college, high, kid, grade; gaming: game, play, playing, new, nintendo, stuff, played, switch; food: food, eat, coffee, drink, chicken, restaurant, beer, taste; tv: episode, character, show, scene, season, end, point; harry potter: harry, mr, potter, charlie, ron, fred, hermione, professor; career: job, company, team, working, career, industry, experience; sports: world, team, australia, cup, final, club, week, player; biology: cell, dna, bond, virus, genetic; crime: murder, police, crime, case, found, death, killer; language: word, language, english, spanish, use, learn, speak; astronomy: space, science, earth, planet, light, solar, scientist, star; fillers 1: yeah, oh, okay, yes, exactly, gonna, feel, guess, sure, cool; fillers 2: feel, stuff, still, never, went, remember, thought, whatever; effusiveness: love, great, thank, different, amazing, bit, awesome.", "We define a set of explainable linguistic features that are hypothesized to affect engagement.", "These features have been drawn from different podcasting advice blogs, alongside some of our own intuitions.", "Length: Descriptions are known to be important for listeners on their first encounter with the podcast.", "Proportion of ads and show notes: Descriptions of well-known podcasts tend to contain advertisements of other podcasts made by the same network, links to the hosts' or guests' social media presence and websites, or show notes and transcripts; podcast creators are often advised to include such information (Dennis, 2020), and surveys have shown that the majority of podcast listeners do not mind sponsor ads in the content (McLean, 2020).", "We measure the proportion of text detected in episode descriptions by the extraneous content classifier described in Section 3.1.", "The proportion of ads in transcripts is given by a manually identified LDA topic that corresponds to words indicative of ads.", "Faithfulness of episode descriptions to transcripts: Length is a weak signal of informativeness.", "Do listeners seem to prefer descriptions that accurately convey the topics and synopsis of the episode?", "We measure faithfulness of the episode description to the first ten minutes of the transcript as the cosine similarity between the TF-IDF bag-of-words representations of the two texts.", "While we do not have ground-truth labels to evaluate this definition of faithfulness, we assessed it to be a good heuristic by anecdotally reviewing some examples.", "(We found that BERT and related pretrained transformer models are not well suited for this similarity estimation, possibly because of speech recognition errors in the transcripts; if ground-truth faithfulness labels were available, such models could be trained to make accurate judgments.)", "Distinctiveness: Podcast creators are often encouraged to develop a distinctive style (Gray, 2021a).", "We define distinctiveness as the perplexity of the given text under a unigram language model trained over all the episodes in the dataset.", "To control for length, we follow the protocol in Zhang et al. (2019) of randomly sampling a constant number of words from each text and taking the mean cross entropy over a few samples.", "(The text was lightly normalized by case-folding and replacing URLs and social media handles with special tokens; we fixed the constant number of words as 100 for descriptions and 1000 for transcripts, and sampled over 5 runs.)",
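A sketch of the length-controlled distinctiveness measure just described; tokenization and Laplace smoothing for unseen words are assumptions made here, while the sample sizes (100 words for descriptions, 1000 for transcripts) and the 5 runs follow the text above:

```python
import math
import random
from collections import Counter

def train_unigram_lm(corpus_tokens, alpha=1.0):
    """Unigram probabilities over the whole dataset. Laplace smoothing
    (alpha) is an assumption; the paper does not say how unseen words
    are handled."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts)
    return lambda w: (counts[w] + alpha) / (total + alpha * vocab)

def distinctiveness(tokens, prob, n_words=100, n_runs=5, seed=0):
    """Mean cross-entropy (bits) of fixed-size random word samples,
    following the length-control protocol described above."""
    rng = random.Random(seed)
    entropies = []
    for _ in range(n_runs):
        sample = rng.sample(tokens, min(n_words, len(tokens)))
        entropies.append(-sum(math.log2(prob(w)) for w in sample) / len(sample))
    return sum(entropies) / len(entropies)
```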
"Reading Grade Level: Similarly to Zong et al. (2020), we make two measurements: the Flesch-Kincaid grade level (Flesch, 1948), which uses the number of syllables per word and the number of words per sentence, and the Dale-Chall grade level (Chall and Dale, 1948), which measures word 'difficulty' using a lookup table.", "While caution must be taken in interpreting reading grade level for transcribed speech, these measures have been explored for speech in prior work (Schumacher and Eskenazi, 2016).", "Vocabulary Diversity: We examine whether the creators of high engagement podcasts use more diverse vocabularies, quantified by the entropy of the unigram words in the text, motivated by advice to avoid word repetition (Bellis, 2017).", "Sentiment and Emotion: Popular advice often encourages podcast creators to be upbeat and positive (Briggman, 2020).", "The NRC Emotion Lexicon (Mohammad and Turney, 2013) contains positive and negative sentiment assignments, as well as emotions such as anger, trust, and fear, for 14,182 words.", "We measure the proportion of words associated with each of the emotions and sentiments.", "(We experimented with the method of Demszky et al. (2019) to expand the lexicon for the domain by training GloVe embeddings on the dataset, and then, for each emotion, retrieving the words that have the highest mean cosine similarity to the words associated with that emotion; however, an examination of the expansions for our dataset showed that they include too many false positives.)", "Since a lexicon lookup for sentiment is naturally limited in that it does not account for compositionality and cannot model words and variants that are missing in the lexicon, we also apply a full-sentence classifier, the sentiment model from the Google Natural Language API (https://cloud.google.com/natural-language, accessed Dec 2020).", "The output of the classifier is a score between +1 and -1 for each sentence.", "We define positive and negative polarities for each text as the proportion of sentences in the text with highly positive (over +0.5) or highly negative (under -0.5) scores.", "Syntax: Syntactic features are measured by the relative frequencies of each part-of-speech tag.", "While previous work of this nature finds strong effects of syntactic patterns from parses (Ganjigunte Ashok et al., 2013), we find that the noisy speech transcripts result in particularly noisy parses from off-the-shelf parsers.", "Swearing and fillers: We conjecture that podcasts with swearing and adult language may not have broad appeal.", "Public speaking recommendations in podcasting guides (Coips and Kramer, 2020) emphasize the reduction of filler words like 'yeah' or 'okay', and the use of professional speech.",
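The vocabulary-diversity and sentence-polarity features above reduce to a few lines; the per-sentence scores are assumed to come from any classifier returning values in [-1, +1], as the API described above does:

```python
import math
from collections import Counter

def vocabulary_diversity(tokens):
    """Entropy (bits) of the unigram word distribution of one text."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def polarity_proportions(sentence_scores, threshold=0.5):
    """Share of sentences with strongly positive / negative scores,
    using the +0.5 / -0.5 cutoffs described above."""
    n = len(sentence_scores)
    pos = sum(s > threshold for s in sentence_scores) / n
    neg = sum(s < -threshold for s in sentence_scores) / n
    return pos, neg
```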
"We attempted to manually define lexicons of these types of categories, but found that it is challenging and prone to human biases, especially given the novel domain and automatic transcripts.", "Instead, we take advantage of the observation that some of the topics inferred by the LDA model correspond to swear words and filler terms, and measure the proportions of these topics.", "Speech Rate and Non-Speech Time: Podcast creators are often encouraged to speak slowly, since novice speakers tend to rush their delivery (Gray, 2021b).", "Since the transcripts in the dataset contain time alignments of each word, we measure the duration of speech segments in the audio, giving us the speech rate in terms of words per minute.", "We also measure the amount of time spent on non-speech.", "In this section, we analyze the different linguistic features by comparing group means between the top and bottom 25% of podcasts by engagement within each popularity quartile (approximately 335 podcasts per group) with bootstrapped Welch's t-tests.", "We report the group mean differences of LDA topic proportions in order to contextualize results on the other features.", "For LDA features, we note significance after a Bonferroni correction of α = 0.05/100, and for the other linguistic features, a Bonferroni correction of α = 0.05/30.", "In the results, 'description' refers to the concatenation of the show description and the representative episode's description.", "When there is an effect from the show description but not the episode's or vice versa, they are explicitly identified as such.", "Among the podcasts in the top popularity quartile, high engagement is associated with topics around lifestyle and culture, mental health, spirituality, and crime, while in the lower popularity quartiles, high engagement podcasts include those about investing, working out, careers, business, parenting, health, art, and relationships.", "Table 3 shows the features with significant differences between the high and low engagement groups.", "We review the main takeaways from these results.", "High engagement podcasts are longer, and have appropriate descriptions: Across all quartiles, podcasts with high engagement tend to be longer on the whole (contrary to advice to keep episodes short), and contain less non-speech in the first ten minutes than the low engagement group.", "They also have descriptions that are more similar to the first ten minutes of the transcripts, which may be because long, faithful descriptions better prepare listeners for the episode.", "The correlation between ads and engagement is mixed: Large amounts of ads in transcripts are associated with lower engagement in all but the bottom popularity quartile.", "While this may be explained by the fact that many listeners skip over ads in the audio stream (Reddy et al., 2021), the effect is strong enough to indicate that ads seem to hurt engagement, even though surveys report that most listeners do not mind ads.", "The negative association could be a result of our dataset being constrained to first-time listeners; further analysis needs to be done to understand whether it holds for returning listeners.",
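One plausible reading of the 'bootstrapped Welch's t-tests' with Bonferroni correction used in this section is sketched below; the exact bootstrap procedure is not specified in the text, so the resampling scheme here is an assumption:

```python
import numpy as np
from scipy import stats

def bootstrapped_welch_pvalue(high, low, n_boot=1000, seed=0):
    """Welch's t-test on a group-mean difference, aggregated over
    bootstrap resamples of both groups (an assumed interpretation of
    'bootstrapped Welch's t-tests'). Inputs are 1-D arrays of one
    feature for the high- and low-engagement groups."""
    rng = np.random.default_rng(seed)
    pvals = []
    for _ in range(n_boot):
        h = rng.choice(high, size=len(high), replace=True)
        l = rng.choice(low, size=len(low), replace=True)
        pvals.append(stats.ttest_ind(h, l, equal_var=False).pvalue)
    return float(np.median(pvals))

# Bonferroni thresholds used above:
alpha_lda, alpha_feat = 0.05 / 100, 0.05 / 30
```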
"Ads in episode descriptions, on the other hand, do not hurt engagement on the whole, and in fact are associated with higher engagement in the top quartile, likely because much of the detected 'ad' content in popular podcasts consists of promotional material about the podcast itself, which often includes useful information such as links to the hosts' websites and show notes.", "Complex and mainstream language: Vocabulary diversity in descriptions and transcripts is consistently larger in the high engagement group, as is reading grade level.", "High engagement podcasts have more punctuation in their descriptions and more conjunctions (arising from the use of long sentences), adverbs, adpositions, and determiners in their transcripts.", "These syntactic features correlate with topics such as culture, mental health, investing, and art.", "At the same time, surprisingly, high engagement podcasts use less distinctive language compared to the rest of the corpus than the low engagement group.", "On closer examination, we find that podcasts scoring low on reading grade level also score high on distinctiveness.", "Positive sentiments and suspense: On the whole, high engagement is associated with more positive and less negative emotions and sentiment.", "This relationship is stronger outside of the top popularity quartile.", "A notable exception is 'fear' in the top popularity quartile, which is explained by the high engagement of popular crime-related podcasts.", "High engagement podcasts are less likely to contain interjections and swearing: As expected, words such as 'oh', 'right', and 'cool' in contexts that the tagger infers as interjections are significantly less likely to occur in high engagement podcasts.", "Similarly, swearing is associated with low engagement.", "Filler words are only negatively associated with engagement in the lowest popularity quartile, though the lack of correlation in other quartiles could be because the LDA topics representing fillers don't model context, and therefore do not capture their discourse function in the way the tagger does for interjections.", "High engagement podcast creators tend to speak relatively fast: While popular advice warns presenters against rushing their speech, the data indicates that on average, high engagement is associated with high speech rates, which is also a finding in previous work (Tsagkias et al., 2010).", "Next, we build classifiers to automatically distinguish high and low engagement podcasts.", "The prediction task is treated as a balanced binary classification problem.", "We make a single dataset for podcasts across all quartiles by aggregating the top and bottom K% podcasts by stream rate within each quartile.", "This aggregation is to ensure fair comparisons of podcasts in different quartiles, since a stream rate value that is considered high for a popular podcast, for example, may not be so in the low quartiles.", "Models are trained and evaluated with the same stratified 5-fold cross validation splits.", "We train logistic regression classifiers using different representations of the content: the linguistic features listed previously, the non-stylistic LDA topic proportions, and bag-of-ngrams (unigram and bigram words) with TF-IDF scoring.", "In addition, we train two neural classifiers: a feedforward neural network with a single hidden layer, using a paragraph vector representation (Le and Mikolov, 2014) of the document as input (paragraph vector embeddings were trained on the descriptions and transcripts of the full 100k+ podcast corpus), and the pre-trained BERT (Devlin et al., 2019) uncased 
English model (we used the implementation in the Hugging Face library (Wolf et al., 2020), https://huggingface.co/bert-base-uncased) with a classification head, fine-tuned on this task.", "With the linguistic features, we also conduct an ablation study, removing one group of features at a time, to estimate their contributions to predictive performance.", "Prediction accuracies (Table 4) are over 70% with linguistic features only, indicating that the features we have identified are relatively strong predictors of engagement.", "The reading grade level of descriptions and transcripts makes a big contribution as shown in the ablation results, as do the syntactic features on transcripts.", "Analysis of the weights of the bag-of-ngrams models surfaces patterns in language usage that corroborate our analysis of linguistic features: swearing and negative sentiment are predictive of low engagement, for example.", "They also suggest subtle dimensions of variation to complement our set of linguistic features.", "In Table 5, we collect some of the most predictive terms and manually group them into classes.", "Table 5 (terms in descriptions and transcripts sampled from the top 200 unigrams and bigrams that are highly predictive of engagement; low engagement / high engagement): he, she, they, his, her, him, it / me, you, us, we, my, our, their, myself, someone; um, gonna, oh, like / like, because like, but like, such as, okay, all right, you guys, basically; and / and, sort of, kind of, was like, you know, quite, literally; but, because / and, so; which / when, what, who, how; all / lot of, little bit; says / asking; can, cannot / was, were, wasn't; make, use / explore, wanted; today, still, quickly / always, started, the time.", "First or second person pronouns are predictive of high engagement in contrast to third person pronouns.", "This aligns with the finding by Tsagkias et al. (2010) that personal experiences are favored in high engagement podcasts.", "While fillers exist in both groups, the specific terms used are different, with 'kind of' and 'literally' being predictive of high engagement in contrast to 'um' and 'but like'.", "The conjunction 'and' is preferred by high engagement podcasts over 'but', and 'so' over 'because'.", "Interrogative words are more predictive of high engagement with the exception of 'which', as are open-ended and future-looking terms like 'asking', 'explore', and 'started' over grounded, immediate terms like 'make', 'use', 'today', and 'quickly'.", "We emphasize that this is a small qualitative analysis of the most predictive features, and more work needs to be done to establish which terms are actually used in semantically similar contexts in the data.", "We leave explorations of computable features that encode these aspects to future work.", "On the whole, models with lexical content features perform better than the linguistic signals, which is expected since these models encode more information than a small set of hand-designed features.", "The BERT classifiers achieve nearly 81% accuracy, indicating that podcast content is highly predictive of engagement.", "Table 6 shows how classification accuracies change when the task is to distinguish the top and bottom K% podcasts, with K ranging from 10 to 50 (all reports thus far have been with K = 25).", "Performance drops as K increases (and the gap between the two sets thereby decreases) although the amount of training data goes up, showing that the differences in language usage are more predictable at the extremes of engagement.",
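A minimal sketch of the bag-of-ngrams baseline with the stratified 5-fold evaluation described above; hyperparameters such as the regularization strength are unspecified in the text and left at library defaults here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

def ngram_classifier_accuracy(texts, labels):
    """texts: description + transcript strings; labels: 1 = top-K%,
    0 = bottom-K% by stream rate within each popularity quartile."""
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram words
        LogisticRegression(max_iter=1000),
    )
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(clf, texts, labels, cv=cv).mean()
```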
"To understand how the relationship between linguistic features and engagement in podcasts compares to other spoken media, we carry out the same analysis on a corpus of 2480 talks from the TED Conferences (Tanveer et al., 2018; Acharyya et al., 2020).", "While we don't have access to the stream rate of the lectures, the data includes the total view count and ratings.", "Table 7 (significance of group mean differences between linguistic features of higher and lower engagement, top and bottom 25%, TED talks, as given by the proportion of views that left ratings; shown per popularity quartile 1 (top) through 4) covers measurements including audio duration, length of description, faithfulness of description, distinctiveness (description, transcript), reading grade level (transcript: Flesch-Kincaid and Dale-Chall), vocabulary diversity (description, transcript), word-level sentiment and emotion (positive sentiment, trust, joy, and disgust in description; trust, anger, fear, disgust, and sadness in transcript), and syntax (adjectives in description; conjunctions in transcript; particles, pronouns in description and transcript; punctuation in description).", "We define engagement as the proportion of total views that left a rating, with the rationale that the act of leaving a rating is roughly analogous to the podcast engagement metric of listening for several minutes.", "Another point of difference between this dataset and the podcasts is that the TED lectures are manually transcribed.", "Therefore, the data is not directly comparable to the podcast dataset, but we carry out the experiment to try to identify which features of high-engagement speech may be universal, and which are podcast-specific.", "We test the same features that we formulated for podcasts, except for LDA topic distributions (due to the small size of the TED corpus relative to the full 100k+ podcast data), and ads and swear words, since these occur rarely if at all in TED talks.", "Table 7 shows the group mean differences between high and low engagement lectures.", "On the whole, there are fewer significant differences, because either the TED data is more homogeneous than podcasts, the metric isn't directly indicative of engagement, or the features that we designed for podcasts don't apply as much for TED talks.", "Table 8 (accuracy of predicting whether a TED talk is high or low engagement): Chance 50.00; Logistic Regression with Linguistic Features: Description 64.01, Transcript 67.99, Description + Transcript 71.15; Logistic Regression with Bag-of-Ngrams: Description 67.02, Transcript 67.34, Description + Transcript 68.40; BERT: Description 68.67, Transcript 66.72, Description + Transcript 71.92.", "Like podcasts, higher engagement lectures are longer; however, longer and more faithful descriptions are actually associated with lower engagement.", "Vocabulary diversity is associated with high engagement, but unlike podcasts, high engagement lectures have lower reading grade levels.", "Since we find that lecture transcripts measure over one grade level higher than podcasts, it could be that after a point, simplicity is rewarded.", "Positive emotions are more significantly associated with engagement compared to the podcast data, which may be because of the inspirational nature of the talks and the relative paucity of crime-related content (and 
in fact, positive sentiment overall is more prevalent compared to the podcast data).", "There is less variation in syntactic features, possibly because talks are scripted and follow similar templates.", "The syntactic features with correlations tend to follow similar patterns as in podcasts.", "On the prediction task, we achieve up to 71.15% accuracy (Table 8) using only linguistic features, similar to the performance on podcasts.", "However, the bag-of-ngrams features are less predictive than linguistic features, and the BERT model only matches the classifier with linguistic features rather than exceeding it.", "This may be because there isn't as much variation in topical content as in podcasts.", "Our paper centers five-minute stream rate as the target metric for analysis and prediction.", "Systems optimized for engagement on social media platforms have the potential to spread misinformation and radical content (Ribeiro et al., 2020), or be manipulated by bad actors (Sehgal et al., 2021).", "On the other side of the coin, studies have found that algorithms driven by engagement do not spread false news at a higher rate than true news (Vosoughi et al., 2018), and that under certain conditions, engagement metrics may actually reward quality content (Ciampaglia et al., 2018).", "Aggregate stream rate in podcasts is a specific engagement metric distinct from the metrics and media in previous studies.", "There is limited previous work on engagement in podcasts.", "Holtz et al. (2020) find that algorithms driven by engagement lead to less diverse recommendations; however, that work does not study the type of content that is favored by the engagement metric.", "While a comprehensive analysis of podcast engagement is beyond the scope of this work, we manually examine the top 10% of podcast episodes by engagement in our collection, a total of 537 episodes.", "As we noted in Section 5.1.1, the LDA topics associated with high engagement are broad: lifestyle, mental health, spirituality, crime, investing, working out, careers, business, parenting, health, art, and relationships.", "Our manual audit confirms that high engagement podcasts do primarily span these topics.", "In particular, we do not find any episodes containing harmful content, incendiary language, or politically controversial topics in this set.", "We conclude that while the connection between any absolute measure of intrinsic quality and engagement is unknown, high engagement in our study does not correspond to harmful content.", "This paper presents the first quantitative analysis of how linguistic style and textual attributes in podcasts relate to listener engagement using automatically computed features.", "We test several hypotheses, and identify factors that validate popular advice on podcast creation, as well as those with unexpected correlations.", "Our predictive models perform well at distinguishing high and low engagement podcasts using only textual information.", "Our comparison with a similar task on TED data shows similarities and differences between podcasts and public lectures vis-à-vis engagement.", "Opportunities for future research include the investigation of other podcast creation advice based on paralinguistic features from the podcast audio (such as pitch and intonation), speaker identities and shifts within a conversation, trajectories of linguistic features over the course of the episode, and models using manual transcripts.", "We thank Ann Clifton, Bernd Huber, Jussi Karlgren, Mi Tian, and 
Zahra Nazari for their input and discussions.", "Since our dataset consists of a few thousand podcasts, uses automatically generated transcripts, and only contains podcasts from publishers owned or operated by Spotify (Clifton et al., 2020), care must be taken when generalizing from these results to deploying automatic recommendation systems, or advising podcast creators.", "It is also worth noting that aggregated engagement data may reflect the language preferences of the dominant community, and may be biased against minority cultural and linguistic subcommunities.", "While this dataset lacks self-identified labels on demographics and sociolinguistic identities, there are opportunities for future work (in either podcasts or other media) to collect these self-identifications in order to study questions such as disparities in automatic speech recognition performance by race or gender (Koenecke et al., 2020; Tatman, 2017), and whether engagement is biased towards certain dialects.", "This paper defined a specific metric, namely, the rate of streaming for at least five minutes; results related to this metric may or may not apply to other engagement metrics.", "As with all user data, the engagement metric is influenced by the interface and recommendations of the streaming platform from which the data was collected, and may not translate to other platforms, nor reflect an objective notion of listener engagement.", "We also reiterate (from Section 7) that listener engagement must not be used as a proxy for intrinsic quality or success.", "It must also be emphasized that the stylistic associations that were observed to distinguish high and low engagement podcasts in this particular dataset are correlations with no causality established, and therefore must be interpreted with caution." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "result", "result", "method", "result", "result", "method", "abstain", "other", "other", "other", "abstain", "other", "method", "method", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "method", "method", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain" ]
[ "Models pre-trained with a language modeling objective possess ample world knowledge and language skills, but are known to struggle in tasks that require reasoning.", "In this work, we propose to leverage semi-structured tables, and automatically generate at scale question-paragraph pairs, where answering the question requires reasoning over multiple facts in the paragraph.", "We add a pre-training step over this synthetic data, which includes examples that require 16 different reasoning skills such as number comparison, conjunction, and fact composition.", "To improve data efficiency, we sample examples from reasoning skills where the model currently errs.", "We evaluate our approach on three reasoning-focused reading comprehension datasets, and show that our model, PReasM, substantially outperforms T5, a popular pre-trained encoder-decoder model.", "Moreover, sampling examples based on model errors leads to faster training and higher performance.", "Large pre-trained language models (LMs) (Devlin et al., 2019; Liu et al., 2019; Brown et al., 2020) have become the backbone of natural language processing in recent years.", "However, recent work has shown that they struggle in performing symbolic reasoning operations, such as composition or conjunction of facts (Talmor et al., 2019, 2020), numerical operations (Wallace et al., 2019; Hidey et al., 2020), and quantification (Warstadt et al., 2019), without substantial amounts of additional data.", "Past work on improving reasoning in pre-trained models has taken two flavors:", "(a) adding specialized components for specific skills, like numerical and temporal reasoning (Ran et al., 2019; Gupta et al., 2020a; Khot et al., 2021; Chen et al., 2020a), or", "(b) generating synthetic examples at scale, for example, by using grammars or templates (Rozen Work done while working at the Allen Institute for Artificial Intelligence.", "et al., 2019; Zhao et al., 2019; Andreas, 2020; Asai and Hajishirzi, 2020; Campagna et al., 2020), and question generation models (Alberti et al., 2019; Puri et al., 2020; Bartolo et al., 2021).", "In this work, we take the latter approach and argue that semi-structured tables are a valuable resource for automatic generation of training data that can endow LMs with reasoning skills.", "Tables can be crawled from the web at scale, and cover a wide range of domains and topics.", "Moreover, their structured nature makes them amenable to automatic processes of data generation.", "Specifically, given a table, we use templates to generate reading comprehension (RC) examples, that is, question-context-answer triplets, where answering the question requires diverse types of reasoning over facts mentioned in the context.", "Fig. 1 shows an example table, and three generated question-context-answer examples, which require fact composition, number comparison, and computing a date difference.", "Unlike prior work where semi-structured data was used for reasoning over tables or knowledge-bases (Eisenschlos et al., 2020; Yin et al., 2020; Herzig et al., 2020; Yu et al., 2021), here we harness tables to allow LMs to reason over text directly.", "Fig. 
"We generate data by crawling tables from Wikipedia, and applying 16 different example generators (EGs) on each table.", "Each EG corresponds to a particular reasoning skill (composition, numerical comparison; see Table 1 for the full list), and comprises a small set of question templates.", "Variables in the templates are filled with content from the table, and the structure of the table allows us to compute the answer automatically.", "The context is a list of facts generated from the table that contain facts required for answering the question as well as distractor facts.", "We add a pre-training step over this generated data, where we perform multi-task training over the 16 tasks corresponding to the EGs.", "Since each EG can generate vast numbers of examples, it is important to focus training on reasoning skills that the model lacks.", "Thus, we use error-driven sampling (Gottumukkala et al., 2020) to construct training batches, where most examples are sampled from EGs that the model currently struggles with.", "We fine-tune our Pre-trained for Reasoning Model, PReasM, on three RC datasets that require reasoning: DROP (Dua et al., 2019), IIRC (Ferguson et al., 2020), and MMQA (Talmor et al., 2021).", "PReasM outperforms the original pre-trained T5 (Raffel et al., 2020) model by significant margins: 7.6, 4.1, and 1.2 F1 points, respectively.", "Our results set a new state-of-the-art on MMQA and are the best results on IIRC for models where the retriever and reader are trained separately.", "Our analysis shows that PReasM leads to improvements of up to 40 F1 points on specific question types, such as computing the difference between two dates, without causing a drop in other question types.", "In conclusion, our results suggest that tables are a viable and untapped source of information for automatically generating large amounts of data that can be used to endow LMs with skills that are not captured using current pre-training approaches.", "Our code, data, and models are publicly available and can be downloaded from https://github.com/oriyor/turning_tables.", "Our goal is to train a RC model that, given a question q and textual context c, returns an answer a, given a training set D = {(q_i, c_i, a_i)}_{i=1}^N.", "We focus on questions that require reasoning over the context, e.g., composing two facts.", "To endow LMs with reasoning skills, we want to generate a large synthetic training set D_syn = {(q_j, c_j, a_j)}_{j=1}^M (M ≫ N) from semi-structured tables, before fine-tuning on a target dataset.", "We use tables from English Wikipedia to generate D_syn.", "English Wikipedia includes millions of tables with high lexical and domain diversity (Fetahu et al., 2019; Chen et al., 2020b; Gupta et al., 2020b; Talmor et al., 2021; Nan et al., 2021; Neeraja et al., 2021a).", "We extract from Wikipedia all tables T that have at least two columns and 10-25 rows, resulting in more than 700K tables.", "Then, we annotate all table columns with their semantic type (STRING, NUMBER, or DATE), which allows us to generate questions that involve manipulating numbers and dates.", "Details on the process of column annotation are in Appendix A.1.", "The core of the generation process is the example generators (EGs), each corresponding to a reasoning skill (Table 1).", "Each example generator g ∈ G is a function that takes a table t ∈ T and randomly samples at most ten (q, c, a) triplets from the set of all possible triplets, where", "(i) q 
is a question in pseudo-language,", "(ii) c is the context, i.e., a list of facts extracted from t that includes the gold facts necessary for answering q as well as distractor facts, all phrased in pseudo-language, and", "(iii) a is the answer.", "Overall, the synthetic training set is D_syn = ∪_{t∈T} ∪_{g∈G} g(t).", "EGs generate examples in the following way.", "Each EG is associated with one or more question templates, which differ in their surface phrasing.", "(We also experimented with using just one question template per EG and observed very similar downstream results.)", "Templates contain typed variables that are instantiated with content from the table (see all variables in Table 1).", "Column and value variables are indexed to specify that the variable val:i must be instantiated by a value from the column col:i.", "Instantiating all variables results in the question q, and the template allows us to programmatically compute the answer a.", "E.g., in the question from Fig. 1, 'In League Cup of 1990-91 Chelsea F.C. season, Which Round had a higher Attendance: QF or QFR?', the answer a can be found by finding the rows with the values QF and QFR in the column Round, and returning the value that has a higher number in the column Attendance.", "The context c is generated from the content necessary for answering the question, which can be identified using the instantiated question template.", "Facts generally have the form 'The col:1 when the col:2 was val:2 was val:1'.", "E.g., to answer the question above, we generate the gold facts 'The Attendance when the Round was QF was 34,178', and 'The Attendance when the Round was QFR was 33,861'.", "We also generate distractors by generating facts from rows or columns that are not relevant for the question, e.g., 'The Attendance when the Round was R4 was 9,789'.", "Data generation yields 4.8M questions from over 176K tables and 130K pages.", "Table 2 contains examples of generated (q, c, a) triplets, including the full context c.", "Table 3 shows the number of generated examples for each EG.", "The number of distinct words is large (850K), illustrating the wide coverage and high lexical diversity of our approach.", "Moreover, generated examples have diverse answer types, which include text spans (43.2%), yes/no questions (31.6%), numeric (15.8%), and date answers (9.4%).", "In addition, our questions cover a wide range of domains including popular culture, politics, and science.", "Tables cover more than 2,500 different Wikipedia categories, with 150 categories covering 80% of the data.",
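To make the generation recipe concrete, here is a toy number-comparison example generator in the spirit of the Fig. 1 example; the data structures and the function name are illustrative, not the released implementation:

```python
import random

def number_comparison_eg(title, table, key_col, num_col, n=10, seed=0):
    """Toy number-comparison EG. `table` is a list of row dicts;
    returns up to n (question, context, answer) triplets, with two
    gold facts and one distractor fact shuffled into the context."""
    rng = random.Random(seed)
    examples = []
    for _ in range(min(n, len(table) // 3)):
        r1, r2, rd = rng.sample(table, 3)  # two gold rows + a distractor
        question = (f"In {title}, which {key_col} had a higher "
                    f"{num_col}: {r1[key_col]} or {r2[key_col]}?")
        context = [f"The {num_col} when the {key_col} was "
                   f"{r[key_col]} was {r[num_col]}." for r in (r1, r2, rd)]
        rng.shuffle(context)
        answer = max((r1, r2), key=lambda r: r[num_col])[key_col]
        examples.append((question, context, answer))
    return examples

rows = [{"Round": "QF", "Attendance": 34178},
        {"Round": "QFR", "Attendance": 33861},
        {"Round": "R4", "Attendance": 9789}]
print(number_comparison_eg("League Cup of 1990-91 Chelsea F.C. season",
                           rows, "Round", "Attendance", n=1))
```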
"Fig. 3 presents the most common categories of the Wikipedia pages from which we scraped our tables.", "Since our EGs generate large quantities of examples, one can think of each EG as providing an infinite stream of examples.", "In this setup, a natural question is how to construct training batches such that the model learns the required skills as quickly as possible.", "Figure 3: The most frequent categories of our Wikipedia pages and their frequency.", "After briefly describing our model, we will detail our training framework, where we sample examples from EGs in an error-driven manner.", "Model: We use a standard encoder-decoder architecture (Raffel et al., 2020; Lewis et al., 2020).", "Given a training example (q, c, a), the model takes as input the sequence of tokens 'q [SEP] c', and the task is to autoregressively decode the answer a token-by-token.", "We train to maximize the maximum likelihood objective log P(a | q, c).", "Given a pre-trained LM, we add another pre-training step, where we multi-task over a set of tasks S, each task corresponding to examples generated from one EG.", "Similar to past work (Yogatama et al., 2019; Geva et al., 2020), to avoid catastrophic forgetting (Kirkpatrick et al., 2016) of the language skills, we sample batches from the original pre-training task with probability 0.5.", "Past work (Gottumukkala et al., 2020) has shown that heterogeneous batching, i.e., having examples from all tasks in each batch, leads to better performance compared to having entire batches from a single task.", "We follow this practice, and in every batch sample examples from every task according to a probability distribution P_tasks ∈ R^{|S|}.", "The main question is how to determine the distribution P_tasks, which we turn to next.", "Uniform sampling: Past work (Khashabi et al., 2020; Raffel et al., 2020; Wang et al., 2020) used uniform sampling, where the probability to sample from a task s is P_tasks(s) = 1/|S|, as a priori all tasks are equally important.", "Some approaches also sample examples in proportion to the size of the training set (Raffel et al., 2020; Wang et al., 2020).", "This is not applicable in our case, where we assume an infinite stream of examples for every task, and make no assumptions on the distribution over reasoning skills in the downstream test set.", "Error sampling: Recent work (Sharma et al., 2018; Gottumukkala et al., 2020) proposed to construct P_tasks based on model errors, where one over-samples tasks with higher errors.", "More formally, let Ceil(s) be an estimate of the maximum accuracy achievable on a task s, and Acc(s) be the current model accuracy for task s on a held-out set.", "We define Δ(s) = Ceil(s) - Acc(s) and P_tasks(s) = Δ(s) / Σ_{s'∈S} Δ(s').", "The distribution P_tasks is updated every time we evaluate the current model on the held-out data.", "In our setup, since the data is synthetic and abundant, we assume that the ceiling accuracy for all tasks is 1.0, and hence Δ(s) = 1.0 - Acc(s).", "Similar to Gottumukkala et al. (2020), we use accuracy over a held-out set rather than the training loss, as this corresponds directly to our target metric.",
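The error-sampling distribution defined above is straightforward to compute from held-out accuracies; the uniform fallback when all tasks reach the ceiling is an added safeguard, not from the paper:

```python
def error_sampling_distribution(held_out_acc, ceil=1.0):
    """P_tasks(s) ∝ Ceil(s) - Acc(s), recomputed at every held-out
    evaluation. `held_out_acc` maps task name -> accuracy in [0, 1]."""
    deltas = {s: max(ceil - acc, 0.0) for s, acc in held_out_acc.items()}
    total = sum(deltas.values())
    if total == 0:  # all tasks at ceiling: fall back to uniform
        return {s: 1 / len(deltas) for s in deltas}
    return {s: d / total for s, d in deltas.items()}

# e.g. error_sampling_distribution({"composition": 0.9, "date diff": 0.5})
# puts 5x more probability mass on "date diff" than on "composition".
```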
"Momentum sampling: A potential issue with error sampling is that if the error rate on a task is high, the model will spend most of its time on that task at the expense of other tasks, which may lead to low data efficiency.", "To remedy this, we introduce momentum sampling, a sampling strategy that samples from a task in proportion to its rate of improvement, putting most probability mass on skills that are improving rapidly.", "Alg. 1 provides the details of momentum sampling.", "Algorithm 1 (Momentum Sampling) takes as input a window size w, the training time t, a minimum share of examples per task, and a smoothing factor k.", "Let t denote the index of a checkpoint evaluated on the held-out set, let w be a window size, and Acc_s(i) be the held-out accuracy of checkpoint i on task s.", "We estimate model accuracy on a task s at the beginning and end of the window, and sample examples in proportion to the difference in accuracy during that window.", "(We use the difference in performance and not the gain to account for cases of sudden drops in performance.)", "To smooth out accuracy fluctuations in adjacent checkpoints, we estimate accuracy as an average of k model checkpoints.", "During the first w checkpoint evaluations, we simply use uniform sampling.", "Momentum sampling has several theoretical benefits over error sampling.", "First, it does not assume anything about the ceiling accuracy of a task.", "Second, when all tasks converge to their ceiling accuracy, momentum sampling converges to uniform sampling, unlike error sampling, which will over-sample from tasks for which Ceil(s) is low.", "This is useful in cases where the variance of Ceil(s) is high across tasks.", "On the other hand, momentum sampling requires a warm-up of w steps, and might under-sample from tasks that train slowly.", "In Appendix A.2, we describe two synthetic experiments where momentum sampling clearly outperforms error sampling.", "However, we do not observe an advantage for momentum sampling in our experiments in Section 5, and leave further investigation of momentum sampling to future work.",
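A sketch of Algorithm 1 as described above; the minimum-share parameter's symbol and value are garbled in the source, so `eps` below is an assumed placeholder:

```python
def momentum_sampling_distribution(acc_history, w=5, k=2, eps=0.01):
    """Sample tasks in proportion to their accuracy change over a
    window of w checkpoints, smoothing each endpoint over k
    checkpoints. `acc_history` maps task -> list of held-out
    accuracies, oldest first; eps is an assumed minimum share."""
    n_tasks = len(acc_history)
    if len(next(iter(acc_history.values()))) < w:
        return {s: 1 / n_tasks for s in acc_history}  # warm-up: uniform
    gains = {}
    for s, hist in acc_history.items():
        window = hist[-w:]
        start = sum(window[:k]) / k            # window start, smoothed
        end = sum(window[-k:]) / k             # window end, smoothed
        gains[s] = max(abs(end - start), eps)  # difference, not gain
    total = sum(gains.values())
    return {s: g / total for s, g in gains.items()}
```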
"Baselines: Our main baseline is T5 (Raffel et al., 2020), a popular pre-trained encoder-decoder model, which we fine-tune on the downstream datasets.", "We experiment with Base and Large size models.", "For each dataset, we compare to the relevant state-of-the-art model.", "Our pre-trained for reasoning model, PReasM, is a T5 model with another pre-training step on D_syn.", "We experiment with uniform sampling (PReasM-Uni), error sampling (PReasM-Err), and momentum sampling (PReasM-Moment) strategies.", "DROP (Dua et al., 2019) is a RC dataset with questions that require numeric reasoning.", "As an additional baseline, we also compare to GenBERT (Geva et al., 2020), which, similar to our approach, injects numerical skills by automatically generating synthetic data from a grammar.", "IIRC (Ferguson et al., 2020) is a QA dataset where annotators were given a single Wikipedia paragraph and were asked to author questions that depend on that paragraph, but also on other paragraphs linked from the input paragraph.", "This resulted in questions that require discrete temporal (28%) or numeric (11%) reasoning.", "In addition, 30% of the questions are unanswerable.", "We experiment with IIRC in two settings:", "(a) Oracle, where the model is given the gold context, reducing the problem to RC, where we can apply our models.", "(b) Retrieval, where we use the improved pipeline introduced by Ni et al. (2021) to retrieve the context, and replace the NumNet+ (Base) reader (Ran et al., 2019) used by the authors (which has a specialized architecture) with T5/PReasM.", "MMQA (Talmor et al., 2021) is a QA dataset where the input is a question and a context that consists of a table, multiple text paragraphs, and multiple images, and the model must reason over a subset of the input modalities to answer the question.", "(We removed tables that appear in the MMQA development and test sets from D_syn.)", "We chose to use MMQA as it has many questions that involve a conjunction of facts, an operation that is largely missing from other datasets.", "Moreover, a large fraction of the questions can be answered by reasoning over the text and table only.", "Since T5/PReasM cannot handle images or very long contexts, we construct a pipeline that automatically directs some MMQA questions to T5/PReasM, and uses the original Implicit-Decomp baseline from Talmor et al. (2021) elsewhere.", "Our pipeline includes two classifiers: the first determines whether a question requires reasoning over an image, and the second classifies whether a text paragraph is necessary to answer a question.", "Again, we experiment with an oracle and a retrieval setting, such that in the oracle setting our model is presented with the gold paragraphs.", "We provide the full description of this pipeline in Appendix A.4.", "Evaluation metrics: For all datasets, we use the official scripts for computing F1 and EM, which compare the gold and predicted lists of answers.", "Table 4 presents the results of our large models over all datasets, also in comparison to the current state-of-the-art.", "We observe that PReasM substantially improves performance compared to T5 in all conditions, improving on the test set by 7.6, 7.9, 4.1, and 1.2 F1 points on DROP, IIRC-oracle, IIRC, and MMQA, respectively.", "(To verify that the gains of PReasM over T5 are not due to knowledge memorized from D_syn, we trained T5 and PReasM to generate the answer given the question only, without context; we found that the performance of T5 and PReasM is nearly identical in this setup.)", "We obtain new state-of-the-art results on MMQA and IIRC-oracle.", "On IIRC, we improve performance when using the same retriever (Pipeline) and replacing the NumNet+ reader with PReasM.", "(We report the numbers from Ni et al. (2021): 45.8/44.3 F1 on the development/test sets. To fairly compare with the NumNet+ reader, we got the retrieved paragraphs for the Pipeline model through personal communication. However, results on these paragraphs were lower than reported in the paper: 45.5/42.8 F1. The reported results of our models are with this slightly worse retriever, but they still outperform the NumNet+ (Pipeline) results from the original paper.)", "On DROP, specialized architectures for handling numbers still substantially outperform both T5 and PReasM.", "Table 5 shows the effect of different sampling strategies.", "Error sampling and momentum sampling generally outperform uniform sampling, but there is no clear advantage to momentum sampling compared to error sampling.", "We further analyze the effect of sampling methods in Section 5.2.", "We now look at performance on different answer types across datasets, where PReasM leads to dramatic improvements on some types, while maintaining similar performance on other types.", "DROP: Table 6 breaks down performance based on answer types: PReasM outperforms T5 across 
"Evaluation metrics: For all datasets, we use the official scripts for computing F1 and EM, which compare the gold and predicted lists of answers.", "Table 4 presents the results of our large models over all datasets, also in comparison to the current state-of-the-art.", "We observe that PReasM substantially improves performance compared to T5 in all conditions, improving on the test set by 7.6, 7.9, 4.1, and 1.2 F1 points on DROP, IIRC_oracle, IIRC, and MMQA, respectively.[5]", "[5] To verify that the gains of PReasM over T5 are not due to knowledge memorized from D_syn, we trained T5 and PReasM to generate the answer given the question only (without context); we found that the performance of T5 and PReasM is nearly identical in this setup.", "We obtain new state-of-the-art results on MMQA and IIRC_oracle.", "On IIRC, we improve performance when using the same retriever (Pipeline) and replacing the NumNet+ reader with PReasM.[6]", "[6] We report the numbers from Ni et al. (2021) (45.8/44.3 F1 on the development/test sets).", "To fairly compare with the NumNet+ reader, we got the retrieved paragraphs for the Pipeline model through personal communication.", "However, results on these paragraphs were lower than reported in the paper: 45.5/42.8 F1.", "The reported results of our models are with this slightly worse retriever, but still outperform the performance of NumNet+ (Pipeline) from the original paper.", "On DROP, specialized architectures for handling numbers still substantially outperform both T5 and PReasM.", "Table 5 shows the effect of different sampling strategies.", "Error sampling and momentum sampling generally outperform uniform sampling, but there is no clear advantage to momentum sampling compared to error sampling.", "We further analyze the effect of sampling methods in Section 5.2.", "We now look at performance on different answer types across datasets, where PReasM leads to dramatic improvements on some types while maintaining similar performance on other types.", "DROP: Table 6 breaks down performance based on answer types: PReasM outperforms T5 across the board for all model sizes and answer types.", "PReasM-Base outperforms GenBERT on 3 of 4 answer types.", "The high performance of GenBERT on Number questions can be explained by: (a) GenBERT uses digit tokenization, which improves arithmetic reasoning (Thawani et al., 2021), and (b) training on multiple numerical reasoning templates.", "IIRC: Table 7 breaks down performance based on answer types.", "PReasM outperforms T5 in the oracle setup by roughly 8 points for both Base and Large models, and by 2.6-4 points in the retrieval setup.", "Improvements are mostly due to cases where the answer is a numerical Value, where PReasM outperforms T5 by 39.1 and 40.3 F1 points for Base and Large models (oracle setup).", "Comparing PReasM-Base to NumNet+, PReasM outperforms NumNet+ on None, Span, and Binary questions, but lags behind on Value questions, where NumNet+ uses a specialized architecture.", "Overall, PReasM-Large improves the state-of-the-art in the oracle setup by 4.7 F1 points.", "In the retrieval setting, PReasM outperforms NumNet+ (Pipeline) by 4.2 and 0.8 F1 points on the development and test sets, respectively (see Table 4).", "MMQA: Table 8 breaks down performance based on reasoning skills (annotated per example in MMQA).", "PReasM outperforms T5 in both the oracle and the retrieval setting, and for both model sizes.", "The main cause for improvement is comparison questions, where PReasM outperforms T5 by 19 and 11.7 F1 points for Base and Large models.", "PReasM outperforms T5 on conjunction questions in Base models, and on yes/no questions in all settings.", "Interestingly, T5 is equipped with decent composition skills, without any specialized pre-training.", "Compared to Implicit-Decomp: although Implicit-Decomp outperforms our models on questions that require hopping between two table columns and on aggregations, PReasM outperforms Implicit-Decomp in all other cases.", "When considering only questions that require reasoning over text and tables, PReasM-Large improves F1 by 16.1 points, from 62.3 to 78.4.", "Fig. 4 shows statistics on the performance of PReasM on different tasks in D_syn during training.", "The average accuracy across all tasks at the end of training is high, almost 98.0 F1.", "PReasM reaches high performance on all tasks, where the lowest-performing tasks are 'arithmetic addition' (91.1) and 'date difference' (94.7).",
7 ).", "On those tasks, the advantage of error-driven sampling is evident, and it outperforms uniform sampling by as much as 4 points.", "Zooming-in on the learning curve, momentum 6022 Model Oracle ColumnHop Text Composition Comparison Conjunction Yes/No Aggregate Total T5-Base 81.7 75.2 67.0 61.8 74.1 76.9 27.3 71.9 PReasM-Base 80.8 75.7 66.3 80.8 80.8 83.1 36.4 74.3 T5-Large 82.6 79.8 71.8 69.3 83.0 83.1 27.3 76.8 PReasM-Large 84.0 79.7 71.9 81.0 82.3 93.8 36.4 78.4 T5-Base 85.2 82.1 74.6 63.3 77.4 80.0 27.3 77.9 PReasM-Base 86.9 80.0 75.4 84.1 82.6 89.2 36.4 79.9 T5-Large 88.2 85.9 79.4 74.1 83.2 83.1 36.4 82.7 PReasM-Large 87.8 85.6 79.8 83.6 82.3 90.8 45.5 83.8 Implicit-Decomp 96.6 57.1 53.2 78.4 68.1 76.9 59.1 62.3 Table 8: Development F 1 on MMQA with reasoning type breakdown on the development set.", "and error sampling learn reasoning skills a lot faster than uniform sampling.", "Looking at the entropy of P tasks sheds light on the difference between error sampling and momentum sampling.", "Error sampling puts most probability mass on the lowest-performing task (arithmetic addition), and thus its entropy over tasks is roughly constant from a certain point.", "Conversely, momentum sampling puts a lot of probability mass on tasks that are improving quickly at the beginning, but as improvements plateau, it converges towards uniform sampling.", "Fig. 5 and Table 11 (in the Appendix) show the results for T5 and PReasM on D syn .", "The results for T5 were obtained by training in a few-shot manner on 32 examples for 200 steps, as suggested in Ram et al. (2021).", "T5-Large outperforms T5-Base on most tasks, suggesting that larger models are able to learn reasoning skills faster.", "On tasks such as date difference and arithmetic addition, the results for T5-Large are low, at around 10 F 1 .", "Our PReasM models significantly outperform T5 on all tasks.", "Reasoning skills in DROP To check which reasoning skills PReasM has, we use a proposed split of a subset of DROP to reasoning skills (Gupta et al., 2020a).", "Table 9 presents the F 1 for our best PReasM and T5 models, as well as the F 1 from Question Type NMN T5-PReasMT5-PReasMBase Base Large Large Date-Compare 82.6 86.4 87.5 87.6 89.9 Date-Difference 75.4 19.6 78.9 45.4 80.4 Number-Compare 92.7 91.3 95.2 97.3 98.5 Extract-Number 86.1 91.8 94.9 92.1 95.1 Count 55.7 80.1 86.7 86.7 89.2 Extract-Argument 69.7 87.6 86.2 90.5 92.1 Table 9: F 1 on a previously-proposed split of a subset of the development set of DROP to reasoning skills.", "the neural module network (NMN) used in Gupta et al. (2020a).", "NMN was trained only on a subset of the original DROP dataset.", "When comparing to T5, PReasM dramatically improves performance on Date-Difference, and also leads to sizable gains in Number-Compare, Extract-Number and Count.", "Accuracy vs. training cost trade-off We evaluate PReasM-Base models on DROP and IIRCoracle as we vary the number of pre-training steps on D syn (Fig. 
6).", "Most of the improvement happens in the first 100K steps, and error-driven sampling outperforms uniform sampling throughout training.", "Error sampling outperforms momentum sampling in the latter part of training.", "A possible reason is that the reasoning skills in the downstream tasks are correlated with the harder tasks during pre-training (arithmetic addition and date difference).", "This provides an advantage for error 6023 Figure 5: F 1 for each task in D syn , for T5 and PReasM on the held-out evaluation set.", "sampling, since it will focus on these tasks even if the improvement during pre-training is small.", "Template-based data generation has been previously used for data augmentation, for example to inject numerical skills (Geva et al., 2020), and to improve consistency (Asai and Hajishirzi, 2020), and zero-shot accuracy (Zhao et al., 2019).", "In addition, templates were used for dataset construction (Talmor and Berant, 2018; Clark et al., 2020; Thorne et al., 2021), and to analyse model generalization (Rozen et al., 2019).", "In this work, we automatically generate examples by instantiating templates using structured data.", "Since our method relies solely on tables as input, it is highly scalable, has rich lexical diversity, and can be easily extended to new skills and domains.", "Recently, Thorne et al. (2021) introduced the WIKINLDB dataset, which includes queries that require reasoning over a set of textual facts.", "Queries are instantiated with values from a knowledge graph (KG), and facts are generated by a LM.", "Unlike this work, WIKINLDB is focused on evaluating reasoning skills.", "We, on the other hand, show that generated examples can be used to endow a pre-trained LM with new reasoning skills.", "Moreover, tables are much easier to collect at scale compared to KGs, which tend to have limited coverage.", "Data augmentation techniques have been extensively explored in RC, QA, and dialogue (Feng et al., 2021; Talmor and Berant, 2019; Khashabi et al., 2020; Alberti et al., 2019; Puri et al., 2020; Bartolo et al., 2021).", "Here, we focus on tables as a valuable source for data generation.", "Pre-training over tables has focused in the past on reasoning over tables and knowledge-bases (Eisen-schlos et al., 2020; Yin et al., 2020; Herzig et al., 2020; Mller et al., 2021; Yu et al., 2021; Neer-aja et al., 2021b).", "Here, we use pre-training over tables to improve reasoning over text .", "We leave evaluation on tasks beyond RC to future work.", "Error-driven sampling has been considered in the past in the context of active learning (Sharma et al., 2018), reinforcement learning (Graves et al., 2017; Glover and Hokamp, 2019; Xu et al., 2019), transfer learning (Zhang et al., 2020; Pilault et al., 2021), and distributionally robust optimization (Oren et al., 2019; Sagawa et al., 2020), where the goal is to perform well over a family of distributions.", "Similar to Gottumukkala et al. 
"Similar to Gottumukkala et al. (2020), we compute heterogeneous batches based on error rates, and show that this improves efficiency and performance.", "We propose semi-structured tables as a valuable resource for generating examples that can endow pre-trained language models with reasoning skills.", "We generate 5M examples that correspond to 16 reasoning skills from Wikipedia tables and add a pre-training step over this data.", "To improve data efficiency, we use error-driven sampling, which focuses training on the reasoning skills that the model currently lacks.", "We evaluate our model, PReasM, on three reasoning-focused RC datasets and show that it leads to substantial improvements in all cases.", "We thank Elad Segal, Uri Shaham, Tomer Wolfson, and Ankit Gupta for their useful comments, and James Ferguson, Ansong Ni, and Matt Gardner for their help with the IIRC dataset.", "This research was partially supported by The Yandex Initiative for Machine Learning and the European Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant ERC DELPHI 802800)." ]
[ "abstain", "objective", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "other", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "objective", "other", "other", "method", "objective", "other", "other", "method", "other", "abstain", "abstain", "other", "abstain", "objective", "method", "result", "result", "other", "other" ]
[ "In this paper we explore the improvement of intent recognition in conversational systems by the use of meta-knowledge embedded in intent identifiers.", "Developers often include such knowledge, structure as taxonomies, in the documentation of chatbots.", "By using neuro-symbolic algorithms to incorporate those taxonomies into embeddings of the output space, we were able to improve accuracy in intent recognition.", "In datasets with intents and example utterances from 200 professional chatbots, we saw decreases in the equal error rate (EER) in more than 40% of the chatbots in comparison to the baseline of the same algorithm without the meta-knowledge.", "The meta-knowledge proved also to be effective in detecting out-of-scope utterances, improving the false acceptance rate (FAR) in two thirds of the chatbots, with decreases of 0.05 or more in FAR in almost 40% of the chatbots.", "When considering only the well-developed workspaces with a high level use of taxonomies, FAR decreased more than 0.05 in 77% of them, and more than 0.1 in 39% of the chatbots.", "Classification of sentences into a discrete set of classes is a key part of professional conversational systems.", "In fact, most of those systems require developers to define the different classes, or intents , by enumerating exemplars of each of them, since classification is often performed using machine learning (ML) methods.", "The process of classifying an input sentence into a specific intent or signaling it as out-of-scope (OOS) of the system is often referred to as intent recognition .", "Determining a class solely on a list of exemplars is a practical method to implement ML systems but it is hardly a natural way for human beings to define a class.", "In real life, people define classes often using a rich mix of symbolic definitions, sometimes taxonomic in nature, such as in a credit card is a type of bank card, coupled with its sub-classes, for instance, basic, premium, and typical features such as international.", "People also use exemplars, card X of bank Y is a credit card, as well as particular examples to describe a sub-class, such as in card W is an international card.", "They also use counter examples, either categorically, a debit card is not a credit card, or in examples, card Z is not a credit card.", "Defining and specifying classes in the real world is, in fact, a cultural, contextual, and linguistic construct, and how people and societies perform this process is a traditional research subject in social sciences, notably in anthropology (Durkheim and Mauss, 1963; Needham, 1979; Bowker and Star, 2000).", "This paper explores algorithms for intent recognition which use both the sets of exemplars and taxonomic-like symbolic descriptions of a class to define and train intents in conversational systems using ML methods.", "We aim not only to provide methods more aligned to everyday class definition practices of developers but also to improve the accuracy of the ML methods.", "Inspired by a reverse dictionary algorithm (Kartsaklis et al., 2018) and previous work on keyword-based classification (Cavalin et al., 2020), we propose three neuro-symbolic algorithms which combine taxonomic descriptions of classes with traditional exemplar-based supervised learning.", "We show that those novel algorithms are able to decrease error rates for a significant number of datasets, particularly in the difficult task of detecting OOS cases in real, professional chatbots.", "The key idea behind our algorithms is to substitute the typical 
"The key idea behind our algorithms is to substitute the typical softmax used in the output layer of an ML text classifier with a space of embeddings of the taxonomic descriptions of the intents.", "The training process uses the exemplars in standard ways, while the recognition process is performed using similarity distances in the embedding output space.", "This is similar to ideas used in zero-shot learning methods (Palatucci et al., 2009; Socher et al., 2013; Akata et al., 2015, 2016), in which classes defined by sub-concepts are also encoded with special embeddings to allow the detection of new classes without exemplars.", "We tested our algorithms using real datasets by exploring a common practice among developers of conversational systems, who often embed symbolic knowledge as documentation in intent identifiers.", "In previous work (Pinhanez et al., 2021), we observed a pattern among developers of using taxonomic-like structures to name the intents, in which strings of recurring concepts are used to identify and document the different classes.", "For example, an intent about utterances where users ask for the balance of a credit card may be named 'checking credit card balance', while an intent related to finding out the date of payment of the balance could be identified as 'asking credit card balance payment date'.", "We call those structures intent proto-taxonomies, and real examples are shown in Figures 1 and 2.", "In (Pinhanez et al., 2021), we studied the use of intent proto-taxonomies by developers quantitatively and qualitatively, proposed an algorithm to mine this meta-knowledge automatically, and concluded that their use is fairly common in at least one professional chatbot development platform.", "This paper focuses on the algorithms to use the meta-knowledge and on evaluating their impact on the accuracy of intent recognition.", "The paper starts by looking into the recent advances in neuro-symbolic systems and briefly describing the practice of developers of conversational systems of embedding meta-knowledge within the source code of their systems.", "We follow by describing the three proposed algorithms integrating such meta-knowledge into intent recognition ML algorithms, and by evaluating them first with two typical intent recognition datasets and then with hundreds of workspaces created in a professional tool called here ChatWorks.", "The results show that most of those workspaces can benefit from the techniques described in this paper, notably for OOS detection tasks, often with accuracy improvements of 5% or more derived solely from the use of the additional symbolic descriptions from the documentation.", "The value and limits of symbolic categorization in AI have been of interest since the early days (Newell, 1973; Richards, 1982; Kosslyn, 2006).", "But our work fits more in the context of a growing belief that symbolic knowledge needs to be included in ML systems, materialized in the so-called neuro-symbolic approaches (Parisotto et al., 2017; Besold et al., 2017; Tenenbaum et al., 2011; Bengio, 2017; Mao et al., 2019; Hudson and Manning, 2019a; De Raedt et al., 2019).", "Neuro-symbolic methods aim to transfer principles and mechanisms between (often nonclassical) logic-based computation and neural computation (Besold et al., 2017).", "Such systems are viewed by some researchers as a way to embed high-level knowledge and even some form of consciousness into machine learning systems, making the language to develop them closer to 'what passes in a man's own mind' (Bengio, 2017).",
"In recent years, AI has witnessed a myriad of novel neuro-symbolic techniques and their application to different problems, contexts, and scenarios (Parisotto et al., 2017; Manhaeve et al., 2018; d'Avila Garcez et al., 2019; Hudson and Manning, 2019b; De Raedt et al., 2019).", "For instance, in (Mao et al., 2019), an approach for image understanding is suggested which takes object-based scene representations and translates sentences into executable, symbolic programs.", "In (Oltramari et al., 2020), embeddings of knowledge graphs are used as attention layers for tasks such as autonomous driving (AV) and question-answering.", "And in (Kartsaklis et al., 2018), random walks in a knowledge graph are mapped to sentence embeddings for use in an inverse dictionary problem.", "One important requirement for many neuro-symbolic systems is to represent knowledge in a structured format such as knowledge graphs, ontologies, or taxonomies (Ji et al., 2020).", "In some cases, such as the scene ontology for autonomous vehicles in (Oltramari et al., 2020), a lot of effort was needed for manual annotation.", "Nevertheless, as presented in (Fossati et al., 2015), an unsupervised approach can sometimes be used to mine the meta-knowledge introduced by experts, such as the classes in Wikipedia pages.", "Considering our context, intent identifiers are sometimes described using high-level representations of the class, as we detail later.", "This is similar to what is used in some zero-shot learning techniques (Wang et al., 2019), in which classes manually defined by sub-concepts are encoded with special embeddings so that new classes can be detected without training (Palatucci et al., 2009; Socher et al., 2013; Akata et al., 2015, 2016; Chen et al., 2016).", "In (Chen et al., 2016), for example, intent identifiers can be formatted as natural language sentences to learn a model which maps training examples into those sentences, so that the meta-knowledge can be used in zero-shot learning.", "However, the dataset explored in that work is very limited.", "Recent work has also demonstrated that intent recognition can be improved by enhancing class representations with keywords which are extracted from exemplar utterances considering their most common words (Cavalin et al., 2020).", "This work focuses on high-level class representations based on taxonomies and aims to explore their usefulness as enhancers of ML intent recognition algorithms.", "It also explores different ways of embedding taxonomy-like meta-knowledge, considering different methods of representation.", "Most real-world, deployed conversational systems in use today have been built based on the rule-based intent-action paradigm, using platforms such as Luis.ai, Watson Assistant, or Alexa Skills.", "Each intent corresponds to a desired piece of information or an answer sought by the user and is defined by a set of exemplar utterances provided by the chatbot developers.", "During runtime, each utterance from the user is recognized as one of the defined intents or as out-of-scope (OOS), and then the associated action is outputted.",
"In the context of the chatbots built using the ChatWorks platform explored in this paper, previous work by the authors (Pinhanez et al., 2021) has shown that the curators and developers of chatbots often store symbolic knowledge about the intent classes, in a taxonomic form, in a documentation field called nameId.", "Figure 1 shows some examples of those nameIds, obtained from a professional finance chatbot, here translated from the original Portuguese and anonymized to preserve confidential information.", "This practice was studied in workshops with developers (Pinhanez et al., 2021), which determined that the goal of the taxonomic description is to provide the intent classes with a summarized description of each intent.", "Such taxonomic naming is illustrated in Figure 1 (caption: 'Some nameIds of intents of a finance chatbot').", "As described in (Pinhanez et al., 2021), such knowledge-embedding practices are, in fact, fairly common among curators on the ChatWorks platform.", "Using the algorithm reproduced in Appendix A, taxonomic-like symbolic knowledge was automatically extracted from workspaces defining almost 7,000 professional chatbots, in two different languages.", "By considering the different words in the nameIds as basic concepts, and consecutive concepts as having connections between them, we can structure the set of nameIds as a very basic knowledge graph (Ehrlinger and Wöß, 2016), hereafter referred to as an intent proto-taxonomy.", "Figure 2 depicts the intent proto-taxonomy associated with the nameIds in Fig. 1.", "Next, as proposed in (Pinhanez et al., 2021), it is possible to compute the taxonomy rate of a workspace by calculating the ratio between the number of intents with taxonomies and the total number of intents.", "In 3,840 professional workspaces in the English language, it was found that 76% of them had a taxonomy rate above 10%, almost 52% had a taxonomy rate above 50%, and 16% had a very high taxonomy rate, above 90%.", "Moreover, the distribution followed a sort of step function: once the threshold of 32 intents in a workspace was crossed, the majority of the workspaces had a taxonomy rate of more than 50%.", "It seems that, as the complexity of the workspace increases with the number of intents, developers more often resort to documenting them using an intent proto-taxonomy (see Appendix B for details).", "The use in our work of the intent proto-taxonomies as a symbolic description of classes is feasible because: (1) they are part of the documentation of the conversational system, so there is no need to acquire knowledge from experts; and (2) they are easily mined, as described in Appendix A.", "4 Using Taxonomic Intent Descriptions to Improve Intent Recognition: We now present a formal description of the methodology employed in this work, which takes advantage of the intent proto-taxonomies using a neuro-symbolic approach.", "It expands previous work which focused on the use of keywords as the source of symbolic information (Cavalin et al., 2020).",
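"To illustrate, a minimal sketch of building such a proto-taxonomy graph from nameIds is shown below; it is a simplified stand-in for the mining algorithm reproduced in the paper's Appendix A.",

```python
import networkx as nx

def build_proto_taxonomy(name_ids):
    """Build a basic concept graph from intent nameIds.

    Each nameId (e.g. 'checking credit card balance') is split into
    concepts; consecutive concepts are connected, and each intent node
    is linked to its concepts and to concept bigrams, mirroring the
    construction described in the text.
    """
    graph = nx.Graph()
    for name_id in name_ids:
        concepts = name_id.lower().split()
        graph.add_node(name_id, kind="intent")
        for a, b in zip(concepts, concepts[1:]):
            graph.add_edge(a, b)                  # consecutive concepts
            graph.add_edge(name_id, a + " " + b)  # bigram node
        for c in concepts:
            graph.add_edge(name_id, c)            # intent-concept links
    return graph

taxonomy = build_proto_taxonomy(
    ["checking credit card balance",
     "asking credit card balance payment date"])
```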
"An intent classification method is a function D which maps a (potentially infinite) set of sentences S = {s_1, s_2, ...} into a finite set of classes Ω = {ω_1, ω_2, ..., ω_c}: D : S → Ω, D(s) = ω. (1)", "To enable a numeric, easier handling of the input text, an embedding φ : S → R^n is often used, mapping the space of sentences S into a vector space R^n and defining a classification function E : R^n → Ω such that D(s) = E(φ(s)).", "In most intent classifiers, E is composed of a function M which computes the likelihood of s being in a given class, often a neural network, followed by some sort of argmax function.", "Typically, softmax + argmax is used, noted simply as softmax here: S --φ--> R^n --M--> R^c --softmax--> Ω. (2)", "This paper explores how to use embeddings on the output side of the classification function, that is, by embedding the set of classes Ω into another vector space R^m, in some ways resembling the combination of object-based recognition and symbolic programming in (Mao et al., 2019).", "Instead, we combine here standard intent recognition methods with an encoding of taxonomies in knowledge-graph-like structures.", "The idea is to use class embedding functions which capture the knowledge in the intent proto-taxonomies.", "Formally, we use a class embedding function ψ : Ω → R^m, its inverse ψ^-1, and a function M : R^n → R^m to map the two vector spaces, so that D(s) = ψ^-1(M(φ(s))), i.e., S --φ--> R^n --M--> R^m --ψ^-1--> Ω. (3)", "In this paper we explore three embedding methods to implement ψ.", "We use a two-layer neural network as M and employ the standard Mean Square Error (MSE) as the inverse ψ^-1, to determine the closest embedding of each class ω_i to the output of M.", "Our basic inspiration for the algorithms of this paper is a text classification method proposed in (Kartsaklis et al., 2018) for the inverse dictionary problem, where text definitions of terms are mapped to the term they define.", "The embedding of the class set Ω into the continuous vector space (equivalent to the function ψ in equation 3) is done by expanding the knowledge graph of the dictionary words with nodes corresponding to words related to those terms.", "Next, random walks are performed on the graph to compute graph embeddings related to each dictionary node, using the DeepWalk algorithm (Perozzi et al., 2014).", "A Long Short-Term Memory (LSTM) neural network, composed of two layers and an attention mechanism, is used in (Kartsaklis et al., 2018) for mapping the input texts to the input embedding vector space.", "To map the two continuous vector spaces representing the definitions and the dictionary terms, a two-layer neural network M, learned from the training dataset, is used.", "For this work, the knowledge graph is replaced by an intent proto-taxonomy G which associates each class to a node and connects to it nodes which correspond to meta-knowledge concepts related to the class.", "To better capture the sequential aspect of the intent proto-taxonomies, we also connect each class node to bigrams of concepts, i.e., the concatenation of two subsequent concepts.", "We represent this by the function Γ, such that Γ(Ω) = G, which is invertible.", "Substituting this in equation 3: S --LSTM--> R^n --M--> R^m --(DeepWalk∘Γ)^-1--> Ω. (4)", "In practice, we compute the mapping from the class embedding space into the class set, called here InvG : R^m → Ω, simply by determining the distance d between the output point in R^m and the projection of each class and then choosing the closest class.", "That is, for each ω_i, we consider the associated node in G and compute the mapping in R^m of that node: InvG(x) = argmin_{ω_i} d(x, DeepWalk(Γ(ω_i))). (5)",
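"To make the decoding in equation (5) concrete, here is a minimal sketch, assuming precomputed class embeddings (e.g., DeepWalk vectors of the class nodes) and squared error as the distance d, in line with the MSE criterion above.",

```python
import numpy as np

def predict_intent(sentence_vec, M, class_embeddings, class_names):
    """Map a sentence embedding into the class-embedding space and
    return the nearest class (InvG in the paper's notation).

    sentence_vec: phi(s), shape (n,)
    M: a trained mapping R^n -> R^m (any callable here)
    class_embeddings: array of shape (num_classes, m), e.g. DeepWalk
        vectors of the class nodes in the proto-taxonomy graph
    """
    x = M(sentence_vec)                                 # point in R^m
    dists = ((class_embeddings - x) ** 2).sum(axis=1)   # squared error
    return class_names[int(np.argmin(dists))]
```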
"By substituting this function into equation 4, we obtain the algorithm we call here LSTM+T: S --LSTM--> R^n --M--> R^m --InvG--> Ω. (6)", "For comparison, the traditional corresponding classification method is tested, where the graph embedding and associated functions are replaced by softmax+argmax.", "We call this LSTM: S --LSTM--> R^n --M--> R^c --softmax--> Ω. (7)", "4.3 An Alternative to LSTM: USE: Recently, several new general-purpose language models that can be used for computing sentence embeddings have been proposed, among them the Universal Sentence Encoder (USE) (Cer et al., 2018).", "Such an approach consists of a transformer neural network (Vaswani et al., 2017) trained on varied sources of data, such as Wikipedia, web news, web question-answer pages, and discussion forums.", "USE has achieved state-of-the-art results on various tasks, so we decided to try it in our experiments as an alternative to the LSTM for the embedding of input sentences, yielding the algorithm we call USE+T: S --USE--> R^n --M--> R^m --InvG--> Ω. (8)", "Like in the previous case, we also compute the USE algorithm with traditional discrete softmax outputs for comparison, called here simply USE: S --USE--> R^n --M--> R^c --softmax--> Ω. (9)", "To explore variants of algorithms for embedding the classes, and also approaches which do not need to be trained from scratch and allow on-the-fly handling of meta-knowledge, we tried replacing DeepWalk with two different methods.", "The first one consists of applying USE sentence embeddings also for the class embeddings, as in eq. 10, where, to simplify notation, emb represents either LSTM or USE embeddings for the input text: S --emb--> R^n --M--> R^m --(USE∘Γ)^-1--> Ω. (10)", "This approach is similar to the way DeepWalk works, but instead of training the graph embeddings from scratch, the class embeddings are represented by the mean sentence embedding computed from different random walks starting in the class node.", "We name such methods LSTM+S and USE+S, for emb substituted by LSTM and USE, respectively.", "Additionally, we also evaluate the replacement of DeepWalk by the Convolutional Deep Structured Semantic Model (CDSSM) proposed in (Chen et al., 2016), yielding the following algorithm, where emb can be either LSTM or USE embeddings: S --emb--> R^n --M--> R^m --(CDSSM∘Γ)^-1--> Ω. (11)", "The CDSSM model consists of a three-layer convolutional neural network trained for creating embeddings of intent identifiers represented as sentences.", "In this work, we input to CDSSM the sequence of concepts listed in the nameId of each intent.", "We refer to these algorithms as LSTM+C and USE+C, for emb being substituted with LSTM and USE, respectively.", "An intuitive way to understand those methods is to consider that USE+T uses a taxonomy as if its concepts had just abstract meanings: only their relations matter.", "In comparison, USE+S considers the meaning of the concepts besides their relations, while USE+C regards each nameId as a sentence, almost as if the developer had inputted a written description of the intent.", "In this paper we are interested in both of the problems of: (1) deciding whether a user utterance is in-scope (IS) or out-of-scope (OOS) of the system; and (2) determining to which class an IS utterance belongs.", "For the former, a rejection mechanism based on a pre-defined threshold is used, since it can be easily applied to all of the methods described previously without the need for any specific training procedure or OOS training data.", "In detail, suppose that for each class ω_i there is a score denoted z_i ∈ Z, where |Z| = |Ω|.", "Given that max(Z) represents the highest score associated with a class and that a rejection threshold δ has been defined on a validation set, samples can be classified as OOS whenever max(Z) < δ.", "If so, they are simply rejected, i.e., no classification output is produced for them.", "Otherwise, the sample is considered as in-scope and the classification is conducted normally.", "The scores in Z are represented either by the softmax probability, for the traditional softmax-based methods, or by the similarity between sentence and intent embeddings, for the three proposed approaches.", "For the latter, the similarity is computed by means of the dot product between the two embeddings.",
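"A compact sketch of this threshold-based rejection mechanism is shown below; the score array and threshold value are placeholders, with scores being either softmax probabilities or dot-product similarities as described above.",

```python
import numpy as np

def classify_with_rejection(scores, class_names, delta):
    """Reject a sample as out-of-scope when its best class score falls
    below the validation-tuned threshold delta; otherwise return the
    highest-scoring class.

    scores: the array Z with one score per class (softmax probability
    or dot-product similarity, depending on the method).
    """
    if scores.max() < delta:
        return None  # out-of-scope: no classification output
    return class_names[int(np.argmax(scores))]
```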
"In this section we present the experiments to evaluate the three algorithms described in the previous section, using each of the input embeddings, LSTM and USE.", "We explore the impact on intent recognition both in terms of correctly classifying utterances (IS accuracy) and of finding which utterances are not covered by the intents (OOS accuracy).", "We employ a commonly used metric for OOS detection, the equal error rate (EER) (Tan et al., 2019), which corresponds to the classification error rate when the threshold is set to a value where the false acceptance rate (FAR) and the false rejection rate (FRR) are the closest.", "These two metrics are defined as: FAR = (number of accepted OOS samples) / (total of OOS samples) (12); FRR = (number of rejected IS samples) / (total of IS samples) (13).", "In addition, the in-scope error rate (ISER) is considered to report IS performance, i.e., the error rate considering only IS samples when δ is set to zero, similar to the class error rate in (Tan et al., 2019).", "This metric is important to evaluate whether the proposed classification methods are able to keep up with the performance of the baselines on the main classification task.",
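"For concreteness, FAR, FRR, and an approximate EER could be computed from per-sample maximum scores as in the following sketch; the threshold sweep and the averaging of FAR and FRR at the crossing point are simplifications of ours.",

```python
import numpy as np

def far_frr(max_scores, is_oos, delta):
    """FAR: fraction of OOS samples accepted (score >= delta).
       FRR: fraction of in-scope samples rejected (score < delta)."""
    accepted = max_scores >= delta
    far = accepted[is_oos].mean()
    frr = (~accepted)[~is_oos].mean()
    return far, frr

def equal_error_rate(max_scores, is_oos):
    """Approximate EER: the error rate at the threshold where FAR and
    FRR are closest, following the definition in the text."""
    deltas = np.unique(max_scores)
    far, frr = zip(*(far_frr(max_scores, is_oos, d) for d in deltas))
    far, frr = np.array(far), np.array(frr)
    i = int(np.argmin(np.abs(far - frr)))
    return (far[i] + frr[i]) / 2.0
```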
"During the development and initial testing of the algorithms, we used two English datasets for in-depth experimentation.", "The first is the publicly available Larson dataset (Larson et al., 2019); the second is a private real-world chatbot dataset used by a telecommunications provider for customer care, called here the Telco dataset.", "For the Larson dataset, we created an intent proto-taxonomy by hand, expanding the original identifiers of the intents.", "The goal of the adjustments was to avoid spurious interference from taxonomy shortcomings or errors in the results.", "The complete list of the created taxonomic descriptions of intents is given in Appendix C to allow the reproduction of our results and further experimentation.", "For the Telco dataset, we also created the intent proto-taxonomy by hand.", "In the Larson dataset there is a total of 22,500 IS exemplars, evenly distributed across 150 classes, of which 18,000 were used for training and 4,500 for testing.", "We conducted a simulation of OOS detection with the IS exemplars by doing 5 random samplings in which we took out 30 intents and 3,600 training exemplars.", "We trained only with the remaining 120 intents and 14,400 exemplars.", "The test was then conducted using all of the unused 4,500 exemplars, where the 3,600 associated with the trained classes were considered the IS samples and the remaining 900 became OOS samples.", "The Telco dataset contains 4,093 exemplars and 87 intents.", "From those, 3,069 exemplars were used for training and 1,024 for testing.", "The OOS scenario was simulated by extracting different random samplings in which 5 intents were removed.", "Given the smaller size of this dataset compared to Larson, we conducted 20 samplings instead of 5.", "For both sets we considered the following setup, defined after preliminary evaluations.", "For the LSTM-based methods, the input sentence embedding size was set to 150 and the output embeddings to 200.", "DeepWalk walk sizes were set to 20 for LSTM+T and USE+T.", "For both the USE- and softmax-based methods, we trained a two-layer neural network with 800 hidden neurons for 50 epochs.", "The results on the Larson dataset are graphically depicted in Fig. 3.", "We observed that there was a slight improvement (a decrease) in EER, especially with the USE-based and the LSTM+C methods.", "More notably, there was a significant improvement in terms of FAR for all USE-based methods and for LSTM+S and LSTM+C.", "Notice that even though the proposed approaches generally did not outperform LSTM and USE in ISER (except LSTM+C), we observed that the methods with better ISER also tended to produce better EER and FAR.", "In Fig. 4, we see that the results on the Telco dataset presented a different scenario.", "The proposed methods generally performed worse than, or at best similar to, LSTM and USE in EER.", "In terms of FAR, some methods such as USE+T and USE+C seem to outperform the others but, considering the high standard deviations, the results were not significant.", "On the other hand, we also observed that the methods failed to get close to the softmax-based methods in ISER.", "That seems to indicate that in cases where making use of the meta-knowledge harms ISER too much, the symbolic knowledge decreased neither EER nor FAR.", "There were two key findings from our experiments with the Larson and the Telco datasets.", "First, the improvements using LSTM or USE as a baseline seemed to be similar, possibly slightly better for the USE algorithm.", "Second, and most importantly, we saw much more improvement from the use of the intent proto-taxonomy on the Larson dataset than on the Telco dataset, in spite of the similar nature of the datasets and the intent proto-taxonomies.", "This motivated us to try out the ideas on a larger and more diverse set of workspaces, focusing solely on USE to simplify the experiments.", "To test our algorithms in a context of high diversity and realism, we used the same large set of real, professional workspaces explored in (Pinhanez et al., 2021), which come from the professional chatbot development platform ChatWorks.", "We started with the 3,840 workspaces available in English.", "To eliminate possible problems due to workspaces of poor quality, we employed the 3-sigma rule, where values smaller or greater than 3 standard deviations from the mean are not considered: workspaces with a number of intents or exemplars below or above those thresholds were removed.", "Also, to avoid workspaces with few exemplars per intent, the ratio of the number of exemplars to the number of intents had to be greater than 10.", "From the filtered set we randomly selected 200 workspaces for testing.", "The evaluation involved the execution of 20 iterations for each workspace.", "The tests were performed for all USE-based methods (USE, USE+T, USE+S, and USE+C).", "First, the workspaces were split into training and test datasets (75% and 25%, respectively).", "Next, the four methods were trained and tested on these datasets.", "The evaluation metrics (EER, FAR, and ISER) were then measured on the results for the test datasets, and the average errors and their standard deviations were computed.", "Appendix D contains a table with the results for each of the 200 workspaces in the ChatWorks dataset.",
"Figure 5 summarizes the results of the experiments, showing the distribution of the 200 workspaces according to ranges of the improvement of each of the three methods compared to the USE baseline.", "Improvement is calculated by subtracting the errors of each of our proposed methods from the errors of the USE baseline (error values are scores between 0 and 1).", "When one of our methods was worse than the baseline, then diff < 0, since smaller is better; conversely, when it is better than the baseline, diff ≥ 0.", "The results shown in Fig. 5 indicate that the USE+C algorithm achieved the best results on all three metrics, although there is a significant portion of workspaces where the other methods also did well, especially in OOS detection (FAR).", "(Figure 5: Distribution of the performance difference (diff) to the USE baseline of the 3 methods according to the EER, FAR, and ISER metrics on all 200 workspaces of the ChatWorks dataset.)", "But, more importantly, the results seem to support our claim that meta-knowledge embedded in the output layer of our neuro-symbolic algorithms can improve intent recognition performance in practical systems.", "Notably, in OOS detection (FAR), 67% of the workspaces experienced a decrease in the error rate using USE+C.", "Besides, in 39% of the workspaces we observed a decrease in the error rate of more than 0.05 (on a 0 to 1 scale), and in 23%, of more than 0.1.", "USE+T also did well, with similar but slightly smaller decreases in error.", "Overall, the error rates for the EER metric also decreased in relation to the baseline.", "Figure 5 shows that 41% of the workspaces had some level of decrease in EER with the USE+C algorithm, in 10% of them with decreases of 0.05 or more.", "However, the results for the in-scope accuracy (ISER) were much more modest, with only about 16% of the workspaces having any kind of decrease.", "The ChatWorks dataset, as noted before, includes all kinds of workspaces.", "Taxonomy rates vary anywhere from 0 to 1, and there are very small and very large workspaces.", "To test our methods in a scenario closer to a professional, well-developed chatbot, we filtered the dataset further to include only workspaces with a taxonomy rate greater than or equal to 0.7, a number of intents equal to or greater than 32, and at least an average of 25 exemplars per intent, resulting in 18 workspaces.", "Figure 6 shows the distribution of the results of the experiments with those 18 workspaces, which were better than on the full ChatWorks dataset.", "Both USE+T and USE+C yielded EER decreases in 50% or more of the workspaces.", "Moreover, 83% of the workspaces decreased the FAR error, either with USE+T or USE+C, and both decreased FAR by more than 0.1 in more than a third of the workspaces.", "We discuss the results and implications next.", "We started this paper by proposing the combination of exemplars and symbolic characterizations of a class as a way to enhance ML-based intent recognition.", "We proposed 3 new neuro-symbolic algorithms and tested them using datasets built from data in the intent identifiers of conversational systems.", "Such identifiers often store taxonomic-like structures, due to a common practice among developers of professional conversational systems (Pinhanez et al., 2021).", "The results of the experiments indicate that the intent proto-taxonomies embedded by those developers can indeed be used by many workspaces to improve accuracy in intent recognition, notably in OOS detection.",
"We see as one of the main contributions of this paper the creation of methods with which ML engineers can improve the accuracy of their systems by simply mining documentation from chatbots, without any further data or annotation.", "Our results show that almost 40% of the 200 professional workspaces drawn from ChatWorks saw decreases of more than 0.05 in OOS detection error rates.", "Also, in 42% of them the overall error rate was improved, using the USE+C algorithm.", "When considering the 18 more well-structured and well-developed workspaces, we saw much higher gains with the USE+T algorithm.", "Those accuracy improvements were achieved without any change in the training set, but simply by incorporating the meta-knowledge into intent recognition.", "Notice that the testing methodology used in this work is considerably harder than the practice of the majority of research papers, since it evaluates performance on 200 professional, non-edited workspaces from different domains.", "In reality, most ML algorithms do not perform well on all datasets, and ML practitioners often test different algorithms and parameters until accuracy is good enough.", "However, the improvement in OOS detection (FAR) was not mirrored in classification error (ISER).", "First, we must keep in mind that intent classification is often performed in two steps: first, the detection and removal of OOS sentences, followed by intent classification of the IS sentences.", "Given the improvements observed in OOS detection, it would make sense to use our algorithms in the first step for many of the ChatWorks workspaces (about 60% of them), and to use them selectively for IS classification only when they work better than the baseline.", "But why were there so many workspaces where we did not see an impact?", "It is important to take into account that the ChatWorks dataset has workspaces in different stages of development and deployment.", "By selecting better-quality workspaces, we saw much higher gains.", "We briefly explored characterizations of intent proto-taxonomy quality, such as the taxonomy rate, the depth of the taxonomy, and the number of concepts, but we saw no clear correlation with decreases in error rates.", "We believe more complex metrics of knowledge structure need to be employed to characterize which intent proto-taxonomies are likely to have the greatest impact.", "We plan to do so in our future work.", "It is important to notice that, in the workspaces where we did see an impact, the symbolic knowledge was mined from an absolutely raw format.", "In spite of that, by using the basic graph mining method described in Appendix A, it was possible to obtain a meaningful taxonomic structure, similar to a knowledge graph, which could be used by our neuro-symbolic algorithms.", "To improve the quality of the taxonomies, we are working on designing an interface which allows developers to directly manipulate the intent proto-taxonomy to make it more correct and complete, so as to possibly decrease the intent recognition error rates even more.", "We have demonstrated in this work that combining exemplar-based and symbolic ways of defining classes can have a positive impact on the performance of the recognition system.", "This was done in the context of conversational systems, where developers fortuitously embed such alternative descriptions of classes in their name identifiers.", "We believe it is possible to find similar patterns of knowledge embedding in other machine learning development platforms.",
"For example, we know that it is common for people to use similar taxonomic structures when naming file and e-mail folders, giving names to functions and variables in programs and data, and writing comments into Jupyter notebooks.", "Also, ML platforms can further foster the use of meta-data by developers by explicitly asking them to input, besides exemplars, categorical or textual descriptions of the classes.", "As we move along the path of creating such neuro-symbolic systems, not only should we expect that the job of developers becomes easier, as they follow their own cultural and linguistic practices, but also that machines become better at recognizing those classes accurately.", "Using multiple forms of class definitions can be a winning proposition for both ML systems and their developers.", "The ChatWorks dataset was composed only of workspaces in which the developers explicitly opted in to share their code and content for research and development purposes with the company which owns the platform.", "Those workspaces were shared by the company with the authors of this paper under the clear condition of not publicly sharing their contents and of publishing results only in aggregated or anonymous form.", "We do not see any specific impact of those limitations on the results of our research, but they preclude easy forms of replication of our results with that dataset.", "To better enable reproducibility, we presented the analysis of the public Larson dataset and shared the intent proto-taxonomy we created manually from its original intent structure in Appendix C.", "References: Zeynep Akata, Florent Perronnin, Zaid Harchaoui, and Cordelia Schmid." ]
[ "objective", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective", "method", "abstain", "abstain", "objective", "result", "abstain", "result", "abstain", "method", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "method", "abstain", "result", "result", "result", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "Previous work on automatic news timeline summarization (TLS) leaves an unclear picture about how this task can generally be approached and how well it is currently solved.", "This is mostly due to the focus on individual subtasks, such as date selection and date summarization, and to the previous lack of appropriate evaluation metrics for the full TLS task.", "In this paper, we compare different TLS strategies using appropriate evaluation frameworks, and propose a simple and effective combination of methods that improves over the state-of-the-art on all tested benchmarks.", "For a more robust evaluation, we also present a new TLS dataset, which is larger and spans longer time periods than previous datasets.", "The dataset will be made available at https://github.", "com/complementizer/news-tls .", "Timelines of news events can be useful to condense long-ranging news topics and can help us understand how current major events follow from prior events.", "Timeline summarization (TLS) aims to automatically create such timelines, i.e., temporally ordered time-stamped textual summaries of events focused on a particular topic.", "While TLS has been studied before, most works treat it as a combination of two individual subtasks, 1) date selection and 2) date summarization, and only focus on one of these at a time (Tran et al., 2013a,b, 2015b).", "However, these subtasks are almost never evaluated in combination, which leaves an unclear picture of how well TLS is being solved in general.", "Furthermore, previously used evaluation metrics for the date selection and timeline summarization tasks are not appropriate since they do not consider the temporal alignment in the evaluation.", "Just until recently, there were no established experimental settings and appropriate metrics for the full TLS task (Martschat and Markert, 2017, 2018).", "In this paper, we examine existing strategies for the full TLS task and how well they actually work.", "We identify three high-level approaches: 1) Direct summarization treats TLS like text summarization, e.g., by selecting a small subset of sentences from a massive collection of news articles; 2) The date-wise approach first selects salient dates and then summarizes each date; 3) Event detection first detects events, e.g., via clustering, selects salient events and summarizes these individually.", "The current state-of-the-art method is based on direct summarization (Martschat and Markert, 2018).", "We therefore focus on testing the two remaining strategies, which have not been appropriately evaluated yet and allow for better scalability.", "We propose a simple method to improve date summarization for the date-wise approach.", "The method uses temporal expressions (textual references to dates) to derive date vectors, which in turn help to filter candidate sentences to summarize particular dates.", "With this modification, the date-wise approach obtains improved state-of-the-1323 art results on all tested datasets.", "We also propose an event-based approach via clustering, which outperforms (Martschat and Markert, 2018) on one of three tested datasets.", "We use purpose-build evaluation metrics for evaluating timelines introduced by Martschat and Markert (2017).", "For a more robust evaluation, we also present a new dataset for TLS, which is significantly larger than previous datasets in terms of the number of individual topics and time span.", "We summarize our contributions as follows: 1. 
"We summarize our contributions as follows: 1. We compare different TLS strategies side-by-side using suitable evaluation metrics to provide a better picture of how well the full TLS task for news is solved so far.", "2. We propose a simple addition to existing methods to significantly improve date-wise TLS, achieving new state-of-the-art results.", "3. We present a new TLS dataset that is larger than previous datasets and spans longer time ranges (decades of news timelines).", "Timeline summarization for news articles has received some attention in the last two decades (Swan and Allan, 2000; Allan et al., 2001; Chieu and Lee, 2004; Yan et al., 2011a,b; Kessler et al., 2012; Tran et al., 2013a,b; Li and Li, 2013; Tran et al., 2015a,b; Wang et al., 2015, 2016; Martschat and Markert, 2017, 2018; Steen and Markert, 2019).", "The task is commonly split into date selection and date summarization subtasks.", "Supervised machine learning has been proposed to predict whether dates appear in ground-truth timelines (Kessler et al., 2012; Tran et al., 2013a).", "Tran et al. (2015b) use graph-based ranking of dates, which is reported to outperform supervised methods.[1]", "[1] Despite our best efforts, we could neither obtain code for this method from the authors nor reproduce its reported performance, and therefore did not include it in our experiments.", "Several approaches construct date summaries by picking sentences from ranked lists.", "The ranking is based on regression or learning-to-rank to predict ROUGE scores between the sentence and a ground-truth summary (Tran et al., 2013a,b).",
(2014) propose a pipeline to generate timelines consisting of date selection, sentence clustering, and ranking.", "Martschat and Markert (2018) adapt submodular function optimization, commonly used for multi-document summarization, for the TLS task.", "The approach searches for a combination of sentences from a whole document collection to construct a timeline and is the current state-of-the-art for full TLS.", "Steen and Markert (2019) use a two-stage approach consisting of date selection and date summarization to build timelines.", "Other examples of automatic timeline generation can be found in the social media-related literature, where microblogs are often clustered before being summarized (Wang et al., 2014; Li and Cardie, 2014).", "We explore a similar framework for evaluating clustering-based TLS.", "We define the TLS setup and task as follows.", "Given is a set of news articles A , a set of query keyphrases Q , and a ground-truth (reference) timeline r , with l dates that are associated with k sentences on average, i.e., m = k l sentences in total.", "The task is to construct a (system) timeline s that contains m sentences, assigned to an arbitrary number of dates.", "A simpler and stricter setting can also be used, in which s must contain exactly l dates with k sentences each.", "A number of different high-level approaches can be used to tackle this task:", "combination (Martschat and Markert, 2018), or by sentence ranking (Chieu and Lee, 2004).", "Among these, Martschat and Markert (2018)'s solution for the full TLS task has state-of-the-art accuracy but does not scale well.", "2. Date-wise Approach : This approach selects l dates and then constructs a text summary of k sentences on average for each date.", "3. Event Detection : This approach first detects events in A , e.g., by clustering similar articles, and then identifies the l most important events and summarizes these separately.", "Since no prior work has analyzed the latter two categories for the full TLS task, we discuss and develop such approaches next.", "First, we identify the set of possible dates to include in a timeline.", "We obtain these from", "(i) the publication dates of all articles in A and", "(ii) textual references of dates in sentences in A , such as 'last Monday', or '12 April'.", "We use the tool HeidelTime 2 (Strotgen and Gertz, 2013) to detect and resolve textual mentions of dates.", "Next, we select the l most important dates.", "We compare the following date selection methods introduced by Tran et al. 
"Our experiments show that SUPERVISED works best, closely followed by MENTIONCOUNT (Appendix A.1).", "Figure 1 shows an example of publication and date mention counts and ground-truth dates over time.", "Two challenges that date selection methods face are evident: 1) these count signals usually do not perfectly correlate with ground-truth dates, and 2) high values often cluster around important dates, i.e., a correct date is often surrounded by other, incorrect dates with similarly strong signals.", "To summarize a particular date d, we first need to decide which articles or sentences we use as a source to create a summary from.", "Previous research has not explored this aspect much due to the separated treatment of the subtasks.", "We propose a simple but effective heuristic to do this.", "We consider the following two sets to be the primary source of suitable candidate sentences: P_d : Sentences published on or closely after d.", "These often contain initial reports of events occurring on d.", "M_d : Sentences that mention d.", "These sentences are from articles published at any point in time, and may retrospectively refer to d, or announce events on d beforehand.", "(In practice, we include the first 5 sentences in the body of each article published on d and up to 2 days after d into P_d, and we include all sentences found in A that mention d into M_d.)", "The goal is to find a subset of sentences in P_d ∪ M_d that are likely to mention important events happening on d.", "We convert all the sentences in the collection A to sparse bag-of-words (unigram) vectors with sentence-level TF-IDF weighting.", "We represent the sets of sentences P_d and M_d using the mean of their respective sentence vectors, x_{P_d} and x_{M_d}.", "The core assumption of the method is that the content shared between P_d and M_d is a good source for summarizing events on d.", "To capture this content, we build a date vector x_d, so that we can compare sentence vectors against it to rank sentences.", "We set the value of x_d for each dimension i of the feature space as follows: $x_d^i = \begin{cases} \frac{1}{|P_d|} x_{P_d}^i + \frac{1}{|M_d|} x_{M_d}^i & \text{if } x_{P_d}^i > 0 \text{ and } x_{M_d}^i > 0 \\ 0 & \text{otherwise} \end{cases}$ (1)", "Thus the date vector x_d is an average of x_{P_d} and x_{M_d} weighted by the sizes of P_d and M_d, with any features zeroed out if they are missing in either P_d or M_d.", "To rank sentences, we compute the cosine similarity between the vector x_s of each candidate sentence s ∈ (P_d ∪ M_d) and x_d.", "We select the best-scoring candidate sentences by defining a threshold on this similarity.", "To avoid tuning this threshold, we use a simple knee point detection method (Satopaa et al., 2011) to dynamically identify a threshold that represents the knee (or elbow) in the similarity distribution.", "This set of best-scoring sentences is then used as the input for the final date summarization step.",
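To make Eq. (1) and the threshold selection concrete, here is a small numpy sketch of this candidate-selection heuristic (referred to as PM-MEAN later in the text); it reflects our reading of the description rather than the authors' released code, and the simple chord-distance knee detector merely stands in for the method of Satopaa et al. (2011).

```python
import numpy as np

def pm_mean_similarities(P, M):
    """P and M are 2-D arrays of TF-IDF sentence vectors for P_d and M_d
    (one row per sentence). Returns cosine similarities of all candidate
    sentences in P_d and M_d to the date vector x_d of Eq. (1)."""
    x_P, x_M = P.mean(axis=0), M.mean(axis=0)
    mask = (x_P > 0) & (x_M > 0)       # zero out features missing in either set
    x_d = (x_P / len(P) + x_M / len(M)) * mask
    cands = np.vstack([P, M])
    denom = np.linalg.norm(cands, axis=1) * np.linalg.norm(x_d) + 1e-12
    return cands @ x_d / denom

def knee_threshold(sims):
    """Similarity cut-off at the 'knee' of the sorted curve: the point
    farthest from the straight line between the curve's endpoints."""
    y = np.sort(sims)[::-1]
    if len(y) < 3:
        return y[-1]
    x = np.arange(len(y), dtype=float)
    x0, y0, x1, y1 = x[0], y[0], x[-1], y[-1]
    dist = np.abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0)
    dist /= np.hypot(y1 - y0, x1 - x0)
    return y[int(dist.argmax())]
```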
"To construct the final timeline, we separately construct a summary for the l highest-ranked dates.", "Prior to our main experiments, we test several multi-document summarization algorithms: TEXTRANK : Runs PageRank on a graph of pairwise sentence similarities to rank sentences (Mihalcea and Tarau, 2004).", "CENTROID-RANK : Ranks sentences by their similarity to the centroid of all sentences (Radev et al., 2004).", "CENTROID-OPT : Greedily optimizes a summary to be similar to the centroid of all sentences (Ghalandari, 2017).", "SUBMODULAR : Greedily optimizes a summary using submodular objective functions that represent coverage and diversity (Lin and Bilmes, 2011).", "The only modification to these algorithms in our TLS pipeline is that we prevent sentences that do not contain any topic keyphrase from the query Q from being included in the summary.", "CENTROID-OPT has the best results (Appendix A.1) and is used in the main experiments.", "The date-wise approach constructs a timeline as follows: first, rank all potential dates using one of the date selection approaches described, then pick the l highest-ranked ones, pick candidate sentences for each date, and summarize each date individually from the corresponding candidate set, using k sentences.", "We might not be able to summarize a particular date due to the keyword constraint in the summarization step.", "Whenever this is the case, we skip to the next date in the ranked list, until l is reached.",
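The date-wise loop just described can be summarized in a few lines; a sketch, with `rank_dates`-style output, `candidates_for`, and `summarize` standing in for the date selection, candidate selection, and CENTROID-OPT components.

```python
def datewise_timeline(dates_ranked, candidates_for, summarize, l, k):
    """Walk down the ranked date list, summarize each date from its
    candidate sentences, and skip dates whose keyword constraint
    leaves nothing to summarize, until l dates are filled."""
    timeline = []
    for d in dates_ranked:
        if len(timeline) == l:
            break
        summary = summarize(candidates_for(d), max_sentences=k)
        if summary:  # empty if no candidate contains a keyphrase from Q
            timeline.append((d, summary))
    return sorted(timeline)  # chronological order
```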
"When humans are tasked with constructing a timeline, we expect that they reason over important events rather than dates.", "Conceptually, detecting and selecting events might also be more appropriate than selecting dates because multiple events can happen on the same day, and an event can potentially span multiple days.", "To explore this, we test a TLS approach based on event detection by means of article clustering.", "The general approach can be summarized as follows: (1) group articles into clusters; (2) rank and select the l most important clusters; (3) construct a summary for each cluster.", "Similarly to the date-wise approach, this mostly consists of existing building blocks that we adapt for TLS.", "For each input collection A, we compute sparse TF-IDF unigram bag-of-words vectors for all articles in A.", "We apply clustering algorithms to these vectors.", "To cluster articles, we use Markov Clustering (MCL) with a temporal constraint.", "MCL (Van Dongen, 2000) is a clustering algorithm for graphs, i.e., a community detection algorithm.", "It is based on simulating random walks along nodes in a graph.", "Ribeiro et al. (2017) use this approach for clustering news articles.", "We convert A into a graph where nodes correspond to articles so that we can cluster the articles using MCL, with the following temporal constraint: articles a_1, a_2 are assigned an edge only if their publication dates are at most 1 day apart from each other.", "The edge weight is set to the cosine similarity between the TF-IDF bag-of-words vectors of a_1 and a_2.", "The constraint on the publication dates ensures that clusters do not have temporal gaps.", "Furthermore, it reduces the number of similarity computations between pairs of articles considerably.", "We run MCL on this graph and obtain clusters by identifying the connected components in the resulting connectivity matrix (we use the implementation from https://github.com/GuyAllard/markov_clustering).", "We define the cluster date as the date that is most frequently mentioned within the articles of the cluster.", "We identify date mentions using the HeidelTime tool.", "To construct a timeline, we only need the l most important clusters.", "We obtain these by ranking the clusters and retaining the top-l clusters of the ranked list.", "We test the following scores to rank clusters by: SIZE : Rank by the number of articles in a cluster.", "DATEMENTIONCOUNT : Rank by how often the cluster date is mentioned throughout the input collection.", "REGRESSION : Rank using the score of a regression model trained to predict importance scores of clusters.", "For the regression-based ranking method, we represent clusters using the following features: the number of articles in a cluster; the number of days between the publication dates of the first and last article in the cluster; the maximum count of publication dates of articles within a cluster; the maximum mention count of dates mentioned in articles in a cluster; and the sum of mention counts of dates mentioned in articles in a cluster.", "We test two approaches to label clusters with target scores to predict.", "Date-Accuracy: This is 1 if the cluster date appears in the ground truth, else 0.", "ROUGE: The ROUGE-1 F1-score (ROUGE-1 obtained a better overall performance than ROUGE-2 for this purpose) between the summary of the cluster and the ground-truth summary of the cluster date.", "If the cluster date does not appear in the ground truth, the score is set to 0.", "We evaluate these different options (Appendix A.2) and observe that ranking by DATEMENTIONCOUNT works better than the supervised methods, showing that predicting the suitability of clusters for timelines is difficult.", "We use the same multi-document summarization method that works best for the date-wise approach (CENTROID-OPT).", "In summary, the clustering approach builds a timeline as follows: 1) cluster all articles, 2) rank clusters, 3) build a summary with k sentences for the top-l clusters, skipping clusters if a summary cannot be constructed due to missing keywords.", "Furthermore, we skip clusters if the date assigned to the cluster is already used by a previously picked cluster.", "Conceptually, this implies that we can only recognize one event per day.", "In initial experiments, this leads to better results than alternatives, e.g., allowing multiple summaries of length k per day.",
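A sketch of the temporally constrained clustering step, assuming the markov_clustering package linked above (run_mcl/get_clusters is its documented interface); for clarity this version computes the full similarity matrix first, whereas the temporal constraint would normally be used to avoid most of these computations.

```python
import numpy as np
import markov_clustering as mc
from scipy import sparse
from sklearn.metrics.pairwise import cosine_similarity

def temporal_mcl_clusters(tfidf_matrix, pub_dates):
    """tfidf_matrix: (n_articles x vocab) TF-IDF article vectors;
    pub_dates: datetime.date objects aligned with its rows."""
    n = tfidf_matrix.shape[0]
    sims = cosine_similarity(tfidf_matrix)
    adjacency = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Edge only if publication dates are at most 1 day apart.
            if abs((pub_dates[i] - pub_dates[j]).days) <= 1:
                adjacency[i, j] = adjacency[j, i] = sims[i, j]
    result = mc.run_mcl(sparse.csr_matrix(adjacency))  # simulate random walks
    return mc.get_clusters(result)                     # tuples of article indices
```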
"Tran et al. introduced the 17 Timelines (T17) (Tran et al., 2013a) and the CRISIS (Tran et al., 2015a) datasets for timeline summarization from news articles.", "However, we see the need for better benchmarks due to 1) the small number of topics in the T17 and CRISIS datasets (9 and 4 topics, respectively), and 2) their relatively short time spans, ranging from a few months to 2 years.", "Therefore, we build a new TLS dataset, called ENTITIES, that contains more topics (47) and longer time ranges per topic, e.g., decades of news articles.", "In the following, we describe how we obtain ground-truth timelines and input article collections for this dataset.", "Ground-Truth Timelines : We obtain ground-truth timelines from CNN Fast Facts (http://edition.cnn.com/specials/world/fast-facts), which has a collection of several hundred timelines grouped into categories, e.g., 'people' or 'disasters'.", "We pick all timelines of the 'people' category and a small number from other categories.", "Queries : For each ground-truth timeline, we define a set of query keyphrases Q.", "By default, we use the original title of the timeline as the keyphrase.", "For people entities, we use the last token of the title to capture surnames only, which increases the coverage.", "We manually inspect the resulting sets of keyphrases and correct these if necessary.", "Input Articles : For each entity from the ground-truth timelines, we search for news articles using TheGuardian API (http://open-platform.theguardian.com/).", "We use this source because it provides access to all published articles starting from 1999.", "We search for articles that have exact matches of the queries in the article body.", "The timespan for the article search is set so that it extends the ground-truth timeline by 10% of its days before its first and after its last date.", "Adjustments and Filtering : The ground-truth timelines are modified to be usable for TLS and to ensure they do not contain data not present in the document collection: we remove entries in the ground-truth timelines if they do not specify the year, month, and day of an event.", "Ground-truth timelines are truncated to the first and last date of the input articles.", "Entries in the ground-truth timeline are removed if there is no input article published within 2 days.", "Afterwards, we remove all topics from the dataset that do not fulfill the following criteria: the timeline must have at least 5 entries.", "For at least 50% of the dates present in the ground-truth timeline, textual references have to be found in the article collection (e.g., 'on Wednesday' or 'on 1 August').", "This is done to ensure that the content of the timelines is reflected to some degree in the article collection.", "There must be at least 100 and fewer than 3000 articles containing the timeline entity in the input articles.", "This is done to reduce the running time of experiments.",
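The adjustment and filtering rules have a direct procedural reading; a sketch, where the attribute names and data shapes are illustrative assumptions rather than the actual dataset-construction code (timeline entries lacking a full year-month-day are assumed to have been dropped during parsing).

```python
def adjust_and_filter(timeline, articles, mentioned_dates):
    """timeline: dict mapping datetime.date -> summary text;
    articles: objects with a `pub_date`; mentioned_dates: set of dates
    textually referenced anywhere in the article collection."""
    pub_dates = sorted(a.pub_date for a in articles)
    timeline = {
        d: s for d, s in timeline.items()
        # Truncate to the span of the input articles ...
        if pub_dates[0] <= d <= pub_dates[-1]
        # ... and require an article published within 2 days.
        and any(abs((d - p).days) <= 2 for p in pub_dates)
    }
    keep_topic = (
        len(timeline) >= 5
        and sum(d in mentioned_dates for d in timeline) >= 0.5 * len(timeline)
        and 100 <= len(articles) < 3000
    )
    return timeline, keep_topic
```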
"Dataset Characteristics : Tables 2 and 3 give an overview of the properties of the two existing datasets and our new dataset, and mostly show values averaged over the tasks in a dataset.", "An individual task corresponds to one ground-truth timeline that a TLS algorithm aims to simulate.", "#PubDates refers to the number of days in an article collection A on which any articles are published.", "The compression ratio w.r.t. sentences (comp. ratio (sents)) is m divided by the total number of sentences in A, and the compression ratio w.r.t. dates is l divided by #PubDates.", "Avg. date cov. refers to the average coverage of dates in the ground-truth timeline r by the articles in A.", "This can be counted using publication dates in A (published), or using textual references to dates within articles in A (mentioned).", "The fact that there are generally more ground-truth dates covered by textual date references than by publication dates suggests making use of these date mentions.", "T17 has longer (l) and more detailed (k) timelines than the other datasets, CRISIS has more articles per task, and ENTITIES has more topics, more publication dates, and longer time periods per task.", "In our experiments, we measure the quality of generated timelines with the following two evaluation metrics, which are also used by Martschat and Markert (2018):", "Alignment-based ROUGE F1-score: This metric compares the textual overlap between a system and a ground-truth timeline, while also considering the assignments of dates to texts.", "Date F1-score: This metric compares only the dates of a system and a ground-truth timeline.", "We denote the alignment-based ROUGE-1 F1-score as AR1-F and the Date F1-score as Date-F1.",
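The alignment-based ROUGE metric requires aligning system dates to reference dates before scoring, but Date-F1 has a direct set-based reading; a minimal sketch:

```python
def date_f1(system_dates, gold_dates):
    """Date F1: harmonic mean of precision and recall between the date
    sets of a system timeline and a ground-truth timeline."""
    sys_d, gold_d = set(system_dates), set(gold_dates)
    if not sys_d or not gold_d:
        return 0.0
    tp = len(sys_d & gold_d)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(sys_d), tp / len(gold_d)
    return 2 * precision * recall / (precision + recall)
```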
"Concerning the datasets and task, we follow the experimental settings of Martschat and Markert (2018): each dataset is divided into multiple topics, each having at least one ground-truth timeline.", "If a topic has multiple ground-truth timelines, we split the topic into multiple tasks.", "The final results in the evaluation are based on averages over tasks/ground-truth timelines, not over topics.", "Each task includes a set of news articles A, a set of keyphrases Q, and a ground-truth timeline r, with number of dates (length) l, average number of summary sentences per date k, and total number of summary sentences m = l * k.", "In each task, we remove all articles from A whose publication dates are outside of the range of dates of the ground-truth timeline r of the task.", "Article headlines are not used.", "We run leave-one-out cross-validation over all tasks of a dataset.", "We test for significant differences using an approximate randomization test (Marcus et al., 1993) with a p-value of 0.05.", "We use the following configurations for our methods: we adopt the stricter and simpler version of the output size constraint, i.e., we produce timelines with l dates and k sentences per date.", "In the summarization step of our methods, we only allow a sentence to be part of a summary if it contains a keyphrase from Q.", "As opposed to Martschat and Markert (2018), we still keep sentences not matching Q, e.g., for TF-IDF computation, clustering, and computing date vectors.", "We compare the following types of methods that address the full news TLS task.", "Direct summarization approaches: CHIEU 2004 (Chieu and Lee, 2004): An unsupervised baseline based on direct summarization.", "We use the reimplementation from Martschat and Markert (2018).", "MARTSCHAT 2018 (Martschat and Markert, 2018): The state-of-the-art method on the CRISIS and T17 datasets.", "It greedily selects a combination of sentences from the entire collection A maximizing submodular functions for content coverage, textual and temporal diversity, and a high count of date references (multiple variants of this approach were introduced in the paper; we picked the variant called AsMDS + f_TempDiv + f_DateRef due to its good results).", "TRAN 2013 (Tran et al., 2013a): The original date-wise approach, using regression for both date selection and summarization, and using all sentences of a date as candidate sentences.", "PUBCOUNT : A simple date-wise baseline that uses the publication count to rank dates, and all sentences published on a date for candidate selection.", "We use CENTROID-OPT for summarization.", "DATEWISE : Our date-wise approach after testing different building blocks (see Appendix A.1).", "It uses supervised date selection, PM-MEAN for candidate selection, and CENTROID-OPT for summarization.", "CLUST : We use DATEMENTIONCOUNT to rank clusters, and CENTROID-OPT for summarization, which are the best options according to our tests (see Appendix A.2).", "To interpret the alignment-based ROUGE scores better and to approximate their upper bounds, we measure the performance of three different oracle methods:", "DATEORACLE : Selects the correct (ground-truth) dates and uses CENTROID-OPT for date summarization.", "TEXTORACLE : Uses regression to select dates, and then constructs a summary for each date by optimizing ROUGE against the ground-truth summaries.", "FULLORACLE : Selects the correct dates and constructs a summary for each date by optimizing ROUGE against the ground-truth summaries.", "We give more detail about these in Appendix A.3.", "Table 4 shows the final evaluation results.", "We reproduced the results of CHIEU 2004 and MARTSCHAT 2018 reported by Martschat and Markert (2018) using their provided code (with the exception of CRISIS, due to memory issues).", "The other results are based on our implementations.", "Table 10 in Appendix A.6 shows several output examples across different methods.", "Among the methods evaluated, DATEWISE consistently outperforms all other methods on all tested datasets in the alignment-based ROUGE metrics.", "The Date-F1 metric for this method is close to that of other methods, and not always better, which shows that the advantage of DATEWISE is due to the sentence selection (based on our heuristic date vectors) and summarization.", "Note that its date selection method is identical to TRAN 2013.", "We conclude from these results that the expensive combinatorial optimization used in MARTSCHAT 2018 is not necessary to achieve high accuracy for news TLS.", "CLUST performs worse than DATEWISE and MARTSCHAT 2018, except on ENTITIES, where it outperforms MARTSCHAT 2018.", "We find that for the other two datasets, CLUST often merges articles from close dates together that would belong to separate events on ground-truth timelines, which may suggest that a different granularity of clusters is required depending on the task.", "DATEORACLE and FULLORACLE should theoretically have a 100% Date-F1.", "In practice, their Date-F1 scores turn out lower because, for some dates, no candidate sentences that match the query Q can be found, which causes the dates to be omitted from the oracle timelines.", "While the ranking of methods is fairly stable, the performance of all methods varies a lot across the datasets and across individual tasks within datasets.", "To find out what makes individual tasks difficult, we measure the Spearman correlation between AR1-F and several dataset statistics.", "The details are included in Appendix A.5.", "The correlations show that a high number of articles and publication dates and a low compression ratio w.r.t. dates generally decrease performance.", "This implies that highly popular topics are harder to summarize.",
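The per-task difficulty analysis amounts to correlating AR1-F with task statistics; a sketch using scipy (the statistic names are placeholders, not the paper's exact feature set):

```python
from scipy.stats import spearmanr

def difficulty_correlations(ar1f_per_task, stats_per_task):
    """ar1f_per_task: list of AR1-F scores, one per task;
    stats_per_task: dict mapping a statistic name (e.g., '#articles',
    'comp. ratio (dates)') to per-task values aligned with the scores."""
    return {
        name: spearmanr(ar1f_per_task, values).correlation
        for name, values in stats_per_task.items()
    }
```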
"The duration of a topic also corresponds to lower performance, but in a less consistent pattern.", "The generally low performance across tasks and methods is likely influenced by the following factors: the decision of human editors to include particular events in a timeline and to summarise these in a particular way can be highly subjective.", "Due to the two-stage nature of TLS, this problem is amplified in comparison to regular text summarization.", "Article collections can be insufficient to cover every important event of a topic, e.g., due to the specific set of news sources or the search technique used.", "DATEWISE and CLUST are up to an order of magnitude faster to run than MARTSCHAT 2018 (Appendix A.4) since their date summarization steps only involve a small subset of the sentences in an article collection.", "Automatically constructed timelines often contain many adjacent dates, while this is not the case in ground-truth timelines.", "Summaries of such adjacent dates often tend to refer to the same event and introduce redundancy into a timeline.", "To quantify this, we count the proportion of date bigrams in a chronologically ordered timeline that are only 1 day apart.", "The results (see Table 5) show that this is an issue for MARTSCHAT 2018 and DATEWISE, but less so for CLUST, which is designed to avoid this behavior.", "Table 4: Results on the full TLS task (entries marked * are those flagged as significant in the original; '-' means not applicable):

| Method | T17 AR1-F | T17 AR2-F | T17 Date-F1 | CRISIS AR1-F | CRISIS AR2-F | CRISIS Date-F1 | ENTITIES AR1-F | ENTITIES AR2-F | ENTITIES Date-F1 |
|---|---|---|---|---|---|---|---|---|---|
| Text Oracle | 0.198 | 0.073 | 0.541 | 0.136 | 0.052 | 0.297 | 0.069 | 0.023 | 0.20 |
| Date Oracle | 0.179 | 0.057 | 0.926 | 0.202 | 0.063 | 0.974 | 0.17 | 0.047 | 0.757 |
| Full Oracle | 0.312 | 0.128 | 0.926 | 0.367 | 0.15 | 0.974 | 0.232 | 0.075 | 0.757 |
| CHIEU 2004 | 0.066 | 0.019 | 0.251 | 0.052 | 0.012 | 0.142 | 0.036 | 0.01 | 0.102 |
| MARTSCHAT 2018 | 0.105 | 0.03 | 0.544 | 0.075 | 0.016 | 0.281 | 0.042 | 0.009 | 0.167 |
| TRAN 2013 | 0.094 | 0.022 | 0.517 | 0.054 | 0.011 | 0.289 | 0.042 | 0.012 | 0.184 |
| PUBCOUNT | 0.105 | 0.027 | 0.481 | 0.067 | 0.012 | 0.233 | 0.033 | 0.009 | 0.107 |
| DATEWISE | 0.12* | 0.035* | 0.544* | 0.089* | 0.026* | 0.295 | 0.057* | 0.017* | 0.205* |
| CLUST | 0.082 | 0.020 | 0.407 | 0.061 | 0.013 | 0.226 | 0.051 | 0.015 | 0.174 |
| DATEWISE (titles) | - | - | - | 0.072 | 0.016 | 0.287 | 0.057 | 0.017 | 0.194 |", "Note that MARTSCHAT 2018 includes an objective function to reward diversity within a timeline, while DATEWISE has no explicit mechanism against redundancy among separate dates.", "Interestingly, when forcing DATEWISE to avoid selecting adjacent dates (by skipping such dates in the ranked list), the performance in all metrics decreases.", "In this case, high redundancy is a safer strategy for optimizing TLS metrics than enforcing a more balanced spread over time.", "Because of such effects, we advise using automated evaluation metrics for TLS with care and conducting qualitative analyses and user studies where possible.",
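The redundancy statistic of Table 5 can be computed directly; a sketch (dates assumed to be datetime.date objects):

```python
def adjacent_date_ratio(timeline_dates):
    """Proportion of date bigrams in a chronologically ordered timeline
    whose two dates are exactly 1 day apart."""
    dates = sorted(set(timeline_dates))
    if len(dates) < 2:
        return 0.0
    adjacent = sum((b - a).days == 1 for a, b in zip(dates, dates[1:]))
    return adjacent / (len(dates) - 1)
```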
"While using article titles can make timelines more readable and understandable (Tran et al., 2015a), we do not involve titles in our main experiments, in order to compare directly to MARTSCHAT 2018, and due to the lack of titles in T17.", "The last row in Table 4 shows the results of a separate experiment with DATEWISE in which we build date summaries using titles only.", "Using only titles generally increases AR precision at the cost of recall.", "AR-F is negatively affected in CRISIS but does not change in ENTITIES.", "Figure 1 shows parts of a title-based timeline produced by DATEWISE.", "In this study, we have compared and proposed different strategies to construct timeline summaries of long-ranging news topics: the previous state-of-the-art method based on direct summarization, a date-wise approach, and a clustering-based approach.", "By exploiting temporal expressions, we have improved the date-wise approach and achieved new state-of-the-art results on all tested datasets.", "Hence, we showed that an expensive combinatorial search over all sentences in a document collection is not necessary to achieve good results for news TLS.", "For a more robust and diverse evaluation, we have constructed a new TLS dataset with a much larger number of topics and with longer time spans than in previous datasets.", "Most of the generated timelines are still far from the oracle timelines and leave large room for improvement.", "Potential future directions include a more principled use of our proposed heuristic for detecting content relevant to specific dates, the use of abstractive techniques, a more effective treatment of the redundancy challenge, and extending the new dataset with multiple sources.", "This work was funded by the Irish Research Council (IRC) under grant number EBPPG/2018/23, the Science Foundation Ireland (SFI) under grant number 12/RC/2289_P2, and the enterprise partner Aylien Ltd." ]
[ "abstain", "abstain", "objective", "objective", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "abstain", "objective", "other" ]
[ "Question-answering (QA) data often encodes essential information in many facets.", "This paper studies a natural question: Can we get supervision from QA data for other tasks (typ-ically, non-QA ones)?", "For example, can we use QAMR (Michael et al., 2017) to improve named entity recognition?", "We suggest that simply further pre-training BERT is often not the best option, and propose the question-answer driven sentence encoding ( QUASE ) framework.", "QUASE learns representations from QA data, using BERT or other state-of-the-art contextual language models.", "In particular, we observe the need to distinguish between two types of sentence encodings, depending on whether the target task is a singleor multi-sentence input; in both cases, the resulting encoding is shown to be an easy-to-use plugin for many downstream tasks.", "This work may point out an alternative way to supervise NLP tasks.", "1 1 Introduction It is labor-intensive to acquire human annotations for NLP tasks which require research expertise.", "For instance, one needs to know thousands of semantic frames in order to provide semantic role labelings (SRL) (Palmer et al., 2010).", "It is thus an important research direction to investigate how to get supervision signals from indirect data and improve one's target task.", "This paper studies the case of learning from question-answering (QA) data for other tasks (typically not QA).", "We choose QA because (1) a growing interest of QA has led to many large-scale QA datasets available to the community; (2) a QA task often requires comprehensive understanding of language and may encode rich information that Part of this work was done while the author was at the University of Illinois at Urbana-Champaign.", "is useful for other tasks; (3) it is much easier to answer questions relative to a sentence than to annotate linguistics phenomena in it, making this a plausible supervision signal (Roth, 2017).", "There has been work showing that QA data for task A can help another QA task T , conceptually by further pre-training the same model on A (an often larger) before training on T (a smaller) (Talmor and Berant, 2019; Sun et al., 2019).", "However, it remains unclear how to use these QA data when the target task does not share the same model as the QA task, which is often the case when the target task is not QA.", "For instance, QA-SRL (He et al., 2015), which uses QA pairs to represent those predicate-argument structures in SRL, should be intuitively helpful for SRL parsing, but the significant difference in their surface forms prevents us from using the same model in both tasks.", "The success of modern language modeling techniques, e.g., ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), and many others, has pointed out an alternative solution to this problem.", "That is, to further pre-train 2 a neural language model (LM) on these QA data in certain ways , obtain a sentence encoder, and use the sentence encoder for the target task, either by fine-tuning or as additional feature vectors.", "We call this general framework question-answer driven sentence encoding ( QUASE ) .", "A straightforward implementation of QUASE is to first further pre-train BERT (or other LMs) on the QA data in the standard way, as if this QA task is the target, and then fine-tune it on the real target task.", "This implementation is technically similar to STILTS (Phang et al., 2018), except that 2 We clarify three types of training: pre-training, further pre-training, and fine-tuning.", "Pre-training 
STILTS is mainly further pre-trained on textual entailment (TE) data.", "However, similar to the observations made in STILTS and its follow-up works (Wang et al., 2019), we find that additional QA data does not necessarily help the target task using the implementation above.", "While it is unclear how to predict this behaviour, we do find that it happens a lot for tasks whose input is a single sentence, e.g., SRL and named entity recognition (NER), instead of a sentence pair, e.g., TE.", "This might be because QA is itself a paired-sentence task, and the implementation above (i.e., further pre-training BERT on QA data) may learn certain attention patterns that transfer to another paired-sentence task more easily than to a single-sentence task.", "Therefore, we argue that, for single-sentence target tasks, QUASE should restrict the interaction between the two sentence inputs when it further pre-trains on QA data.", "We propose a new neural structure for this and name the resulting implementation s-QuASE, where s stands for single; in contrast, we name the straightforward implementation mentioned above p-QuASE, where p stands for paired.", "Results show that s-QuASE outperforms p-QuASE significantly on 3 single-sentence tasks (SRL, NER, and semantic dependency parsing (SDP)), indicating the importance of this distinction.", "Let QUASE_A be the QUASE further pre-trained on QA data A.", "We extensively compare 6 different choices of A: TriviaQA (Joshi et al., 2017), NewsQA (Trischler et al., 2017), SQuAD (Rajpurkar et al., 2016), a relation extraction (RE) dataset in QA format (QA-RE for short) (Levy et al., 2017), Large QA-SRL (FitzGerald et al., 2018), and QAMR (Michael et al., 2017).", "Interestingly, we find that if we use s-QuASE for single-sentence tasks and p-QuASE for paired-sentence tasks, then QUASE_QAMR improves all 7 tasks (SRL, SDP, NER, RE, co-reference resolution (Coref), TE, and machine reading comprehension (MRC)) in low-resource settings, with an average error reduction rate of 7.1% compared to BERT.", "While the set of tasks we experimented with here is non-exhaustive, we think that QUASE_QAMR has the potential of improving a wide range of tasks.", "This work has three important implications.", "First, it provides supporting evidence for an important alternative way to supervise NLP tasks: using QA to annotate language, which has been discussed in works such as QA-SRL, QAMR, and QA-RE.", "If it is difficult to teach annotators the formalism of a certain task, perhaps we can instead collect QA data that query the target phenomena and thus get supervision from QA for the original task (and possibly more).", "Second, the distinction between s-QuASE and p-QuASE suggests that sentence encoders should consider some properties of the target task (e.g., this work distinguishes between single- and multi-sentence tasks).", "Third, the good performance of QUASE_QAMR suggests that predicate-argument identification is an important capability that many tasks rely on; in contrast, many prior works observed that only language modeling would improve target tasks in general.", "This work aims to find an effective way to use readily available QA data to improve a target task that is typically not QA.",
"A natural choice nowadays, given the success of language models, is to further pre-train sentence encoders, e.g., BERT, on QA data in certain ways, and then use the new encoder in a target task.", "This general framework is called QUASE in this work, and the assumption is that the sentence encoders learned from QA data carry useful information for the target task.", "A straightforward implementation of QUASE is to further pre-train BERT on QA data in the standard way, i.e., fine-tune BERT as if this QA dataset were the target task, and then fine-tune BERT on the real target task.",
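As a concrete (if simplified) picture of this straightforward implementation, the following sketch runs one further pre-training step of BERT on a SQuAD-style QA example using the HuggingFace transformers library; the example text and token positions are made up for illustration, and this is not the paper's original training code.

```python
import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-base-uncased")
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

question = "Who improved the recipe?"
context = "The recipe was improved by the new chef last spring."
inputs = tokenizer(question, context, return_tensors="pt")
# Gold answer span as token positions in the packed sequence; a real
# pipeline would derive these from the character offsets of the answer.
start, end = torch.tensor([12]), torch.tensor([14])  # hypothetical positions
outputs = model(**inputs, start_positions=start, end_positions=end)
outputs.loss.backward()   # span-prediction loss on the QA example
optimizer.step()
```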
"However, we find that this straightforward implementation is less effective for, or even negatively impacts, target tasks with single-sentence input; similar observations were also made in STILTS (Phang et al., 2018) and its follow-up works (Wang et al., 2019): they further pre-train sentence encoders, e.g., ELMo, BERT, and GPT (Radford et al., 2018), on TE data and find that this is not effective for the syntax-oriented CoLA task and the SST sentiment task in GLUE, which are both single-sentence tasks (Wang et al., 2018).", "One plausible reason is that the step of further pre-training on QA data does not take into account some properties of the target task, for instance, the number of input sentences.", "QA is inherently a paired-sentence task; a typical setup is, given a context sentence and a question sentence, to predict the answer span.", "Further pre-training BERT on QA data will inevitably learn how to attend to the context given the question.", "This is preferable when the target task also takes a pair of sentences as input, while it may be irrelevant or harmful for single-sentence tasks.", "It points out that we may need two types of sentence encodings when further pre-training BERT on QA data, depending on the type of the target task.", "The following subsection discusses this issue in detail.", "Standard sentence encoding is the problem of converting a sentence S = [w_1, w_2, ..., w_n] into a sequence of vectors h(S) = [h_1, h_2, ..., h_n] (e.g., skip-thoughts (Kiros et al., 2015)).", "Ideally, h(S) should encode all the information in S, so that it is task-agnostic: given a target task, one can simply probe h(S) and retrieve the relevant information.", "In practice, however, only the information relevant to the training task of h(S) is kept.", "For instance, when we have a task with multi-sentence input (e.g., QA and TE), the attention pattern A among these sentences will affect the final sentence encoding, which we call h_A(S); in comparison, we denote the sentence encoding learned from single-sentence tasks by h(S), since there is no cross-sentence attention A.", "In a perfect world, the standard sentence encoding h(S) also expresses the conditional sentence encoding h_A(S).", "However, we believe that there is a trade-off between the quality and the quantity of the semantic information a model can encode.", "Our empirical results corroborate this conclusion, and more details can be found in Appendix A.2.", "The distinction between the two types of sentence encodings may explain the negative impact of using QA data on some single-sentence tasks: further pre-training BERT on QA data essentially produces a sentence encoding with cross-sentence attentions, h_A(S), while the single-sentence tasks expect h(S).", "These two sentence encodings may be very different: one view is from the theory of the information bottleneck (Tishby et al., 1999; Tishby and Zaslavsky, 2015), which argues that training a neural network on a certain task extracts an approximate minimal sufficient statistic of the input sentences with regard to the target task; information irrelevant to the target task is maximally compressed.", "In our case, this corresponds to the process where the conditional sentence encoding compresses the information irrelevant to the relation, which will enhance the quality but reduce the quantity of the sentence information.", "In order to fix this issue, we need to know how to learn h(S) from QA data.", "However, since QA is a paired-sentence task, the attention pattern between the context sentence and the question sentence is important for successful further pre-training on QA.", "Therefore, we propose that if the target task takes single-sentence input, then further pre-training on QA data should also focus on single-sentence encodings in the initial layers; the context sentence should not interact with the question sentence until the very last few layers.", "This change is expected to hurt the capability to solve the auxiliary QA task, but it is later shown to transfer better to the target task.", "This new treatment is called s-QuASE, with s representing single-sentence, while the straightforward implementation mentioned above is called p-QuASE, where p means paired-sentence.", "The specific structures are shown in Fig. 1.", "[Figure 1: The architectures of (a) s-QuASE, with separate sentence and question pipelines over BERT connected by Sentence2Question and Question2Sentence attention, Sentence Modeling, Question Modeling, and Interaction layers below a START/END classification layer, and (b) p-QuASE.]", "The architecture of s-QuASE is shown in Fig. 1(a).", "When further pre-training it on QA data, the context sentence and the question sentence are fed into two pipelines.", "We use the same Sentence2Question and Question2Sentence attention as used in BiDAF (Seo et al., 2017).", "Above that, the Sentence Modeling, Question Modeling, and Interaction layers are all bidirectional transformers (Vaswani et al., 2017) with 2 layers, 2 layers, and 1 layer, respectively.", "Finally, we use the same classification layer as BERT, which is needed for training on QA data.", "Overall, this implementation restricts interactions between the paired-sentence input, especially from the question to the context, because this attention will not be available when serving the target task.", "Using s-QuASE in target tasks: Given a sentence S, s-QuASE can provide a sequence of hidden vectors h(S), i.e., the output of the Sentence Modeling layer in Fig. 1(a).", "Although h(S) does not rely on the question sentence, h(S) is optimized so that the upper layers can use it to handle the questions in the QA training data, so h(S) indeed captures information related to the phenomena queried by those QA pairs.", "For single-sentence tasks, we use h(S) from s-QuASE as additional features, and concatenate it to the word embeddings in the input layer of any specific neural model (we mainly use concatenation in both types of QUASE; however, we also use replacement in some experiments, and we will note these cases later in this paper).",
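A PyTorch sketch of this feature-based use of s-QuASE: the encoder's h(S) vectors are concatenated to the word embeddings of an ordinary BiLSTM tagger; the encoder's interface here is an assumption (any module returning per-token vectors of size quase_dim would do).

```python
import torch
import torch.nn as nn

class TaggerWithSQuaseFeatures(nn.Module):
    def __init__(self, squase_encoder, vocab_size, emb_dim,
                 quase_dim, hidden_dim, num_labels):
        super().__init__()
        self.squase = squase_encoder
        for p in self.squase.parameters():
            p.requires_grad = False          # frozen, feature-based use
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + quase_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        with torch.no_grad():
            h_s = self.squase(token_ids)     # h(S): (batch, seq, quase_dim)
        x = torch.cat([self.embed(token_ids), h_s], dim=-1)
        states, _ = self.lstm(x)
        return self.out(states)              # per-token label scores
```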
"The architecture of p-QuASE is shown in Fig. 1(b), which follows the standard way of pre-training BERT.", "That is, when further pre-training it on QA data, the context sentence and the question sentence form a single sequence (separated by special tokens) and are fed into BERT.", "Using p-QuASE in target tasks: Given a sentence pair S (concatenated), p-QuASE produces h_A(S), i.e., the output of the BERT module in Fig. 1(b).", "One can of course continue fine-tuning p-QuASE on the target task, but we find that adding p-QuASE to an existing model for the target task is empirically better (although not very significantly); specifically, we add h_A(S) to the final layer before the classification layer, and we also allow p-QuASE to be updated when training on the target task, although it is conceivable that other usages may lead to even stronger results.", "For instance, when the target task is token classification, e.g., MRC, we can simply concatenate the vectors of h_A(S) at each time step to any existing model; when the target task is sentence classification, e.g., TE, we apply max-pooling and average-pooling to h_A(S), respectively, and concatenate the two resulting vectors to any existing model before the final classification layer.",
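For the sentence-classification case, the pooling step looks as follows in PyTorch (a sketch of our reading of the description, not the authors' code):

```python
import torch

def pquase_sentence_features(h_A):
    """h_A: p-QuASE output of shape (batch, seq_len, dim). Returns the
    concatenation of max- and average-pooled vectors, (batch, 2 * dim),
    to be appended to a model's input to the final classification layer."""
    max_pooled = h_A.max(dim=1).values
    avg_pooled = h_A.mean(dim=1)
    return torch.cat([max_pooled, avg_pooled], dim=-1)
```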
"Modern LMs are essentially sentence encoders pre-trained on unlabeled data, and they outperform early sentence encoders such as skip-thoughts (Kiros et al., 2015).", "While an LM like BERT can handle lexical and syntactic variations quite well, it still needs to learn from some annotations to acquire the definition of many tasks, especially those requiring complex semantics (Tenney et al., 2019).", "Although we extensively use BERT here, we think that the specific choice of LM is orthogonal to our proposal of learning from QA data.", "Stronger LMs, e.g., RoBERTa (Liu et al., 2019) or XLNet (Yang et al., 2019), may only strengthen the proposal here.", "This is because a stronger LM represents unlabeled data better, while the proposed work is about representing labeled data better.", "CoVe (McCann et al., 2017) is another attempt to learn from indirect data, translation data specifically.", "However, it does not outperform ELMo or BERT in many NLP tasks (Peters et al., 2018) or in probing analyses (Tenney et al., 2019).", "In contrast, our QUASE will show stronger experimental results than BERT on multiple tasks.", "In addition, we think QA data is generally cheaper to collect than translation data.", "The proposed work is highly relevant to Phang et al. (2018) and their follow-up works (Wang et al., 2019), which use further pre-training on data-rich intermediate supervised tasks and aim to improve another target task.", "Table 1: The naive way of training BERT on QAMR (BERT_QAMR) negatively impacts single-sentence tasks.

| System | SRL (single-sentence) | RE (single-sentence) | TE (paired-sentence) | MRC (paired-sentence) |
|---|---|---|---|---|
| BERT | 34.17 | 62.99 | 78.29 | 79.90 |
| BERT_QAMR | 32.92 | 50.16 | 78.73 | 82.96 |", "The key differences are as follows: First, we distinguish two types of sentence encodings, which provides an explanation for their puzzle that sentence-pair tasks seem to benefit more from further pre-training than single-sentence tasks do.", "Second, they only focus on fine-tuning-based methods, which cannot be easily plugged into many single-sentence tasks such as SRL and Coref, while we analyze both fine-tuning-based and feature-based approaches.", "Third, they mainly use TE signals for further pre-training and evaluate their models on GLUE (Wang et al., 2018), which is a suite of tasks very similar to TE.", "Our work instead makes use of QA data to help tasks that are typically not QA.", "Fourth, from their suite of further pre-training tasks, they observe that only further pre-training on language modeling tasks has the power to improve a target task in general, while we find that QAMR may also have this potential, indicating the universality of predicate-argument structures in NLP tasks.", "Our work is also related to Sentence-BERT (Reimers and Gurevych, 2019) in terms of providing a better sentence representation.", "However, their focus was deriving semantically meaningful sentence embeddings that can be compared using cosine similarity, which reduces the computational cost of finding the most similar pairs.", "In contrast, QUASE provides a better sentence encoder in the same format as BERT (a sequence of word embeddings) to better support tasks that require complex semantics.", "In this section, we conduct thorough experiments to show that QUASE is a good framework for getting supervision from QA data for other tasks.", "We first give an overview of the datasets and models used in these experiments before diving into the details of each experiment.", "Specifically, we use PropBank (Kingsbury and Palmer, 2002) (SRL), the dataset from the SemEval'15 shared task (Oepen et al., 2015) with the DELPH-IN MRS-derived semantic dependencies target representation (SDP), CoNLL'03 (Tjong Kim Sang and De Meulder, 2003) (NER), the dataset of SemEval'10 Task 8 (Hendrickx et al., 2009) (RE), the dataset of the CoNLL'12 shared task (Pradhan et al., 2012) (Coref), MNLI (Williams et al., 2018) (TE), and SQuAD 1.0 (Rajpurkar et al., 2016) (MRC).", "In Table 4, we use the CoNLL'12 English subset of OntoNotes 5.0 (Pradhan et al., 2013), which is larger than PropBank.", "The performance on TE and MRC is evaluated on the development sets (for TE, we mean the matched examples in MNLI).", "For single-sentence tasks, we use both simple baselines (e.g., BiLSTM and CNN; see Appendix B.1) and near-state-of-the-art models published in recent years.", "As in ELMo, we use the deep neural model in He et al. (2017) for SRL, the model in Peters et al. (2018) for NER, and the end-to-end neural model in Lee et al. (2017) for Coref.", "We also use the biaffine network in Dozat and Manning (2018) for SDP, but we removed part-of-speech tags from its input, and the attention-based BiLSTM in Zhou et al. (2016) is the strong baseline for RE.",
"In addition, we replace the original word embeddings in these models (e.g., GloVe (Pennington et al., 2014)) with BERT.", "Throughout this paper, we use the pre-trained case-insensitive BERT-base implementation.", "More details on our experimental setting can be found in Appendix B, including the details of the simple models in B.1, some common experimental settings of QUASE in B.2, and s-QuASE combined with other SOTA embeddings (ELMo and Flair (Akbik et al., 2018)) in B.3.", "We first consider a straightforward method to use QA data for other tasks: to further pre-train BERT on these QA data.", "We compare BERT further pre-trained on QAMR (denoted by BERT_QAMR) with BERT on two single-sentence tasks (SRL and RE) and two paired-sentence tasks (TE and MRC).", "We use a feature-based approach for the single-sentence tasks and a fine-tuning approach for the paired-sentence tasks.", "The reason is two-fold.", "On the one hand, the current SOTAs of all single-sentence tasks considered in this paper are still feature-based.", "How to efficiently use sentence encoders (e.g., BERT) in a fine-tuning approach for some complicated tasks (e.g., SRL and SDP) is unclear.", "On the other hand, the fine-tuning approach shows a great advantage over the feature-based one on many paired-sentence tasks (e.g., TE and MRC).", "Similar to Phang et al. (2018), we find in Table 1 that the two single-sentence tasks benefit less than the two paired-sentence tasks from BERT_QAMR, which indicates that simply further pre-training BERT is not enough.", "We then compare s-QuASE_QAMR and p-QuASE_QAMR on three single-sentence tasks (SRL, SDP and NER) and two paired-sentence tasks (TE and MRC) to show that it is important to distinguish the two types of sentence representations.", "Rather than concatenating the two embeddings as proposed in Sec. 2.2, here we replace BERT embeddings with QUASE embeddings for convenience.", "The results are shown in Table 2.", "We find that s-QuASE has a great advantage over p-QuASE on single-sentence tasks and p-QuASE is better than s-QuASE on paired-sentence tasks.", "The proposal of two types of sentence encoders tackles the problem one encounters when only further pre-training BERT on QAMR for single-sentence tasks.", "In summary, it is necessary to distinguish two types of sentence representations for single-sentence tasks and paired-sentence tasks.", "To see whether adding QUASE to BERT reduces the sample complexity, we compare QUASE_QAMR with BERT on one single-sentence task (SRL) and one paired-sentence task (MRC) with different percentages of training examples.", "For convenience, we replace BERT embeddings with QUASE embeddings for SRL.", "As shown in Figure 2, we find that s-QuASE_QAMR outperforms BERT on SRL with small training data, and p-QuASE_QAMR outperforms BERT on MRC with small training data.", "The results support that (1) adding QUASE to BERT reduces the sample complexity, and (2) QUASE is very important in the low-resource setting.", "For instance, s-QuASE_QAMR achieves an F1 score of 61 on SRL with 30% (27K) of the training examples (compared to 50.92 F1 by BERT).",
"p-QuASE_QAMR achieves 69.81 average F1 on MRC with 0.1% (about 100) of the training examples (compared to 13.29 F1 by BERT).", "We compare BERT with QUASE further pre-trained with the same number of QA pairs on 6 different QA datasets (TriviaQA (Joshi et al., 2017), NewsQA (Trischler et al., 2017), SQuAD, QA-RE (Levy et al., 2017), Large QA-SRL (FitzGerald et al., 2018), and QAMR).", "s-QuASE further pre-trained on the different QA datasets is evaluated on four single-sentence tasks in a feature-based approach: SRL, SDP, NER and RE.", "p-QuASE further pre-trained on the different QA datasets is evaluated on one task (TE) in a fine-tuning approach.", "In Table 3, we find that the best options are quite different across target tasks, which is expected because a task usually benefits more from a more similar QA dataset.", "However, we also find that QAMR is generally a good further pre-training choice for QUASE.", "This is consistent with our intuition: first, QAMR has a simpler concept class than other paragraph-level QA datasets, such as TriviaQA, NewsQA and SQuAD.", "It is easier for QUASE to learn a good representation with QAMR to help sentence-level tasks.", "Second, QAMR is more general than other sentence-level QA datasets, such as QA-RE and Large QA-SRL (although the average performance of QUASE_QAMR on the five tasks is slightly below that of QUASE_{Large QA-SRL}, the benefit of the latter mostly comes from SRL; QUASE is mainly designed to improve a lot of tasks, so QAMR is a better choice in our setup, but in practice, we do not limit QUASE to any specific QA dataset and one can use the best one for the corresponding target task).", "Therefore, we think that the capability to identify predicate-argument structures can generally help many sentence-level tasks, as we discuss next.", "Here we compare QUASE_QAMR with BERT on 5 single-sentence tasks and 2 paired-sentence tasks, where QUASE_QAMR is further pre-trained on the training set (51K QA pairs) of the QAMR dataset.", "As shown in Table 4, we find that QUASE_QAMR
several thousands direct training examples).", "That is because they help the scalability of machine learning methods, especially for some specific domains or some low-resource languages where direct training data do not exist in large scale.", "In this section we discuss a few issues pertaining to improving QUASE by using additional QA datasets and the comparison of QUASE with related symbolic representations.", "We investigate whether adding the Large QA-SRL dataset (FitzGerald et al., 2018) or the QA-RE 9 dataset into QAMR in the further pre-training stage can help SRL and RE.", "We use s-Q UASE embeddings to replace BERT embeddings instead of concatenating the two embeddings.", "The effectiveness of adding existing resources (Large QA-SRL or QA-RE) into QAMR in the further pre-training stage of s-Q UASE on SRL and RE are shown in Table 5.", "We find that adding related QA signals (Large QA-SRL for SRL and QA-RE for RE) into QAMR can help improve specific tasks.", "Noteworthy is the fact that QA-RE can help SRL (Large QA-SRL can also help RE), though the improvement is minor compared to Large QA-SRL (QA-RE).", "These results indicate that adding more QA signals related to the sentence can help get a better sentence representation in general.", "8 Another interesting finding is that simple models usually benefit more from QUASE embeddings than SOTA models.", "9 Because the training set of QA-RE is too large, we randomly choose 100 , 000 training examples.", "Moreover, inducing these representations requires costly annotation by experts.", "Proposals such as QA-SRL, QAMR, semantic proto-roles (Reisinger et al., 2015), and universal dependencies (White et al., 2016) avoid some of these issues by using natural language annotations, but it is unclear how other tasks can take advantage of them.", "QUASE is proposed to facilitate inducing distributed representations instead of symbolic representations from QA signals; it benefits from cheaper annotation and flexibility, and can also be easily used in downstream tasks.", "The following probing analysis, based on the Xinhua subset in the AMR dataset, shows that s-Q UASEQAMR embeddings encode more semantics related to AMR than BERT embeddings.", "Specifically, we use the same edge probing model as Tenney et al. (2019), and find that the probing accuracy ( 73 . 59 ) of s-Q UASEQAMR embeddings is higher than that ( 71 . 58 ) of BERT.", "At the same time, we find that p-Q UASEQAMR can achieve 76 .", "91 F1 on the PTB set of QA-SRL, indicating that p-Q UASEQAMR can capture enough information related to SRL to have a good zero-shot SRL performance.", "More details can be found in Appendix C.1.", "Another fact worth noting is that AMR can be used to improve downstream tasks, such as MRC (Sachan and Xing, 2016), TE (Lien and Kouylekov, 2015), RE (Garg et al., 2019) and SRL (Song et al., 2018).", "The benefits of QUASEQAMR on downstream tasks show that we can take advantage of AMR by learning from much cheaper QA signals dedicated to it.", "QUASE is designed to learn distributed representations from QA signals to help down-stream tasks.", "We further show the difficulties of learning two types of corresponding symbolic representations from QA signals, which indicates that the two other possible methods are not as tractable as ours.", "One option of symbolic representation is the QAMR graph.", "Michael et al. 
"Michael et al. (2017) show that question generation for QAMR representations can only achieve a precision of 28% and a recall of 24%, even with fuzzy matching (multi-BLEU, an average of the BLEU-1 to BLEU-4 scores, > 0.8).", "Furthermore, it is still unclear how to use the complex QAMR graph in downstream tasks.", "These results indicate that learning a QAMR parser for downstream tasks is mainly hindered by question generation, and how to use the full information of QAMR for downstream tasks is still unclear.", "Another choice of symbolic representation is AMR, since QAMR was proposed to replace AMR.", "We consider a simpler setting: learning an SRL parser from Large QA-SRL.", "We propose three models from different perspectives, but the best performance among them is only 54.10 F1, even with fuzzy matching (Intersection/Union >= 0.5).", "More details can be found in Appendix C.2.", "Although a lot of methods (Khashabi et al., 2018; Marcheggiani and Titov, 2017; Strubell et al., 2018) can be adopted to use SRL/AMR in downstream tasks, the difficulty of learning a good SRL/AMR parser from QA signals hinders this direction.", "The difficulties of learning the two types of symbolic representations from QA signals indicate that our proposal of learning distributed representations from QA signals is a better way to make use of the latent semantic information in QA pairs for downstream tasks.", "In this paper, we investigate an important problem in NLP: can we make use of low-cost signals, such as QA data, to help related tasks?", "We retrieve signals from sentence-level QA pairs to help NLP tasks via two types of sentence encoding approaches.", "For tasks with a single-sentence input, such as SRL and NER, we propose s-QuASE, which provides latent sentence-level representations; for tasks with a sentence-pair input, such as TE and MRC, we propose p-QuASE, which generates latent representations related to attentions.", "Experiments on a wide range of tasks show that the distinction between s-QuASE and p-QuASE is highly effective, and QUASE_QAMR has the potential to improve on many tasks, especially in the low-resource setting.", "This material is based upon work supported by the US Defense Advanced Research Projects Agency (DARPA) under contracts FA8750-19-2-0201, W911NF-15-1-0461, and FA8750-19-2-1004, a grant from the Army Research Office (ARO), and Google Cloud.", "The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government." ]
[ "abstain", "method", "result", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "method", "other", "other", "objective", "method", "other", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "other", "other" ]
[ "Visual question answering (Visual QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve.", "In this paper, we study a crucial component of this task: how can we design good datasets for the task?", "We focus on the design of multiple-choice based datasets where the learner has to select the right answer from a set of candidate ones including the target (i.e., the correct one) and the decoys (i.e., the incorrect ones).", "Through careful analysis of the results attained by state-of-the-art learning models and human annotators on existing datasets, we show that the design of the decoy answers has a significant impact on how and what the learning models learn from the datasets.", "In particular, the resulting learner can ignore the visual information, the question, or both while still doing well on the task.", "Inspired by this, we propose automatic procedures to remedy such design deficiencies.", "We apply the procedures to reconstruct decoy answers for two popular Visual QA datasets as well as to create a new Visual QA dataset from the Visual Genome project, resulting in the largest dataset for this task.", "Extensive empirical studies show that the design deficiencies have been alleviated in the remedied datasets and the performance on them is likely a more faithful indicator of the difference among learning models.", "The datasets are released and publicly available via http://www.teds.usc.edu/website_vqa/ .", "Multimodal information processing tasks such as image captioning (Farhadi et al., 2010; Ordonez et al., 2011; Xu et al., 2015) and visual question answering (Visual QA) (Antol et al., 2015) have gained a lot of attention recently.", "A number of significant advances in learning algorithms have been made, along with the development of nearly two dozen datasets in this very active research domain.", "Among those datasets, popular ones include MSCOCO (Lin et al., 2014; Chen et al., 2015), Visual Genome (Krishna et al., 2017), VQA (Antol et al., 2015), and several others.", "The overarching objective is that a learning machine needs to go beyond understanding different modalities of information separately (such as image recognition alone) and to learn how to correlate them in order to perform well on those tasks.", "Evaluating progress on those complex and more AI-like tasks is, however, a challenging topic.", "For tasks involving language generation, developing an automatic evaluation metric is itself an open problem (Anderson et al., 2016; Kilickaya et al., 2017; Liu et al., 2016; Kafle and Kanan, 2017b).", "Thus, many efforts have concentrated on tasks such as multiple-choice Visual QA (Antol et al., 2015; Zhu et al., 2016; Jabri et al., 2016) or selecting the best caption (Hodosh et al., 2013; Hodosh and Hockenmaier, 2016; Ding et al., 2016; Lin and Parikh, 2016), where the selection accuracy is a natural evaluation metric.", "In this paper, we study how to design high-quality multiple choices for the Visual QA task.", "In this task, the machine (or the human annotator) is presented with an image, a question and a list of candidate answers.", "The goal is to select the correct answer through a consistent understanding of the image, the question and each of the candidate answers.", "As in any multiple-choice based test (such as the GRE), designing what should be presented as negative answers (we refer to them as decoys) is as important as deciding the
questions to ask.", "We all have had the experience of exploiting the elimination strategy: this question is easy; none of the three answers could be right, so the remaining one must be correct!", "While a clever strategy for taking exams, such shortcuts prevent us from studying faithfully how different learning algorithms comprehend the meanings in images and languages (e.g., the quality of the embeddings of both images and languages in a semantic space).", "It has been noted that machines can achieve very high accuracies of selecting the correct answer without the visual input (i.e., the image), the question, or both (Jabri et al., 2016; Antol et al., 2015).", "Clearly, the learning algorithms have overfit on incidental statistics in the datasets.", "For instance, if the decoy answers have rarely been used as the correct answers (to any questions), then the machine can rule out a decoy answer with a binary classifier that determines whether the answers are in the set of the correct answers; note that this classifier does not need to examine the image, and it just needs to memorize the list of the correct answers in the training dataset.", "See Fig. 1 for an example, and Sect. 3 for a more detailed analysis.", "We focus on minimizing the impacts of exploiting such shortcuts.", "We suggest a set of principles for creating decoy answers.", "In light of the amount of human effort in curating existing datasets for the Visual QA task, we propose two procedures that revise those datasets such that the decoy answers are better designed.", "In contrast to some earlier works, the procedures are fully automatic and do not incur additional human annotator effort.", "We apply the procedures to revise both Visual7W (Zhu et al., 2016) and VQA (Antol et al., 2015).", "Additionally, we create new multiple-choice based datasets from COCOQA (Ren et al., 2015) and the recently released VQA2 (Goyal et al., 2017) and Visual Genome datasets (Krishna et al., 2017).", "The one based on Visual Genome becomes the largest multiple-choice dataset for the Visual QA task, with more than one million image-question-candidate answers triplets.", "We conduct extensive empirical and human studies to demonstrate the effectiveness of our procedures in creating high-quality datasets for the Visual QA task.", "In particular, we show that machines need to use all three sources of information (image, questions and answers) to perform well; any missing information induces a large drop in performance.", "Furthermore, we show that humans dominate machines in the task.", "However, given that the revised datasets likely reflect the true gap between the human and the machine understanding of multimodal information, we expect that advances in learning algorithms will likely focus more on the task itself instead of overfitting to the idiosyncrasies in the datasets.", "The rest of the paper is organized as follows.", "In Sect. 2, we describe related work.", "In Sect. 3, we analyze and discuss the design deficiencies in existing datasets.", "In Sect. 4, we describe our automatic procedures for remedying those deficiencies.", "In Sect. 5 we conduct experiments and analysis.", "We conclude the paper in Sect. 6.",
"Wu et al. (2017) and Kafle and Kanan (2017b) provide recent overviews of the status quo of the Visual QA task.", "There are about two dozen datasets for the task.", "Most of them use real-world images, while some are based on synthetic ones.", "Usually, for each image, multiple questions and their corresponding answers are generated.", "This can be achieved either by human annotators, or with an automatic procedure that uses captions or question templates and detailed image annotations.", "We concentrate on 3 datasets: VQA (Antol et al., 2015), Visual7W (Zhu et al., 2016), and Visual Genome (Krishna et al., 2017).", "All of them use images from MSCOCO (Lin et al., 2014).", "Besides the pairs of questions and correct answers, VQA, Visual7W, and Visual Madlibs (Yu et al., 2015) provide decoy answers for each pair so that the task can be evaluated in multiple-choice selection accuracy.", "What decoy answers to use is the focus of our work.", "In VQA, the decoys consist of human-generated plausible answers as well as high-frequency and random answers from the datasets.", "In Visual7W, the decoys are all human-generated plausible ones.", "Note that humans generate those decoys by only looking at the questions and the correct answers, but not the images.", "Thus, the decoys might be unrelated to the corresponding images.", "A learning algorithm can potentially examine the image alone and be able to identify the correct answer.", "In Visual Madlibs, the questions are generated with a limited set of question templates and the detailed annotations (e.g., objects) of the images.", "Thus, similarly, a learning model can examine the image alone and deduce the correct answer.", "We propose automatic procedures to revise VQA and Visual7W (and to create new datasets based on COCOQA (Ren et al., 2015), VQA2 (Goyal et al., 2017), and Visual Genome) such that the decoy generation is carefully orchestrated to prevent learning algorithms from exploiting shortcuts in the datasets by overfitting on incidental statistics.", "In particular, our design goal is that a learning machine needs to understand all 3 components of an image-question-candidate answers triplet in order to make the right choice; ignoring one or two components will result in drastic degradation in performance.", "Our work is inspired by the experiments in (Jabri et al., 2016), where they observe that machines without looking at images or questions can still perform well on the Visual QA task.", "Others have also reported similar issues (Goyal et al., 2017; Zhang et al., 2016; Johnson et al., 2017; Agrawal et al., 2016; Kafle and Kanan, 2017a; Agrawal et al., 2018), though not in the multiple-choice setting.", "Our work extends theirs by providing more detailed analysis as well as automatic procedures to remedy those design deficiencies.",
"Besides Visual QA, VisDial (Das et al., 2017) and Ding et al. (2016) also propose automatic ways to generate decoys for the tasks of multiple-choice visual dialog and captioning, respectively.", "Recently, Lin and Parikh (2017) study active learning for Visual QA: i.e., how to select informative image-question pairs (for acquiring annotations) or image-question-answer triplets for machines to learn from.", "On the other hand, our work further focuses on designing better datasets for evaluating a machine.", "In this section, we examine in detail the dataset Visual7W (Zhu et al., 2016), a popular choice for the Visual QA task.", "We demonstrate how the deficiencies in designing decoys impact the performance of learning algorithms.", "In multiple-choice Visual QA datasets, a training or test example is a triplet that consists of an image I, a question Q, and a candidate answer set A. The set A contains a target T (the correct answer) and K decoys (incorrect answers) denoted by D. An IQA triplet is thus {I, Q, A = {T, D1, ..., DK}}.", "We use C to denote either the target or a decoy.", "We investigate how well a learning algorithm can perform when supplied with different modalities of information.", "We concentrate on the one-hidden-layer MLP model proposed in (Jabri et al., 2016), which has achieved state-of-the-art results on the dataset Visual7W.", "The model computes a scoring function f(c, i) = σ(U max(0, W g(c, i)) + b) (Eq. 1) over a candidate answer c and the multimodal information i, where g is the joint feature of (c, i) and σ(x) = 1 / (1 + exp(−x)).", "The information i can be null, the image (I) alone, the question (Q) alone, or the combination of both (I+Q).", "Given an IQA triplet, we use the penultimate layer of ResNet-200 (He et al., 2016) as visual features to represent I, and the average WORD2VEC embeddings (Mikolov et al., 2013) as text features to represent Q and C. To form the joint feature g(c, i), we just concatenate the features together.", "The candidate c ∈ A that has the highest f(c, i) score in prediction is selected as the model output.", "We use the standard training, validation, and test splits of Visual7W, which contain 69,817, 28,020, and 42,031 examples respectively.", "Each question has 4 candidate answers.", "The parameters of f(c, i) are learned by minimizing the binary logistic loss of predicting whether or not a candidate c is the target of an IQA triplet.", "Details are in Sect. 5 and the Supplementary Material.", "Machines find shortcuts: Table 1 summarizes the performance of the learning models, together with the human studies we performed on a subset of 1,000 triplets (cf. Sect. 5 for details).", "There are a few interesting observations.", "First, in the row of A, where only the candidate answers (and whether they are right or wrong) are used to train a learning model, the model performs significantly better than random guessing and humans (52.9% vs. 25%); humans would deem each of the answers equally likely without looking at both the image and the question!", "Note that in this case, the information i in Eq. (1) contains nothing.", "The model learns the specific statistics of the candidate answers in the dataset and exploits those.", "Adding the information about the image (i.e., the row of I+A), the machine improves significantly and gets close to the performance when all information is used (62.4% vs. 65.7%).",
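The scoring function of Eq. (1) above is compact enough to sketch directly. Below is a minimal PyTorch rendering, assuming the 8,192-unit hidden layer reported later in the experimental setup; the class and variable names are illustrative, not the authors' code.

```python
# A minimal sketch of Eq. (1): f(c, i) = sigma(U max(0, W g(c, i)) + b),
# assuming PyTorch. Dimensions and names are illustrative; the paper's
# exact hyperparameters live in its Supplementary Material.
import torch
import torch.nn as nn

class AnswerScorer(nn.Module):
    def __init__(self, feat_dim: int, hidden_dim: int = 8192):
        super().__init__()
        self.W = nn.Linear(feat_dim, hidden_dim)   # W g(c, i)
        self.U = nn.Linear(hidden_dim, 1)          # U h + b

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        # g: joint feature of (c, i), e.g., concatenated answer, image,
        # and question features; returns a score in (0, 1).
        return torch.sigmoid(self.U(torch.relu(self.W(g)))).squeeze(-1)

# Prediction: score each candidate and pick the argmax, e.g.,
# scores = [scorer(torch.cat([ans, img, q])) for ans in answer_feats]
```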
"There is a weaker correlation between the question and the answers, as Q+A improves over A only modestly.", "This is expected.", "In the Visual7W dataset, the decoys are generated by human annotators as plausible answers to the questions without being shown the images; thus, many decoy answers do not have visual groundings.", "For instance, a question of what animal is running? elicits equally likely answers such as dog, tiger, lion, or cat, while an image of a dog running in the park will immediately rule out all but the dog; see Fig. 1 for a similar example.", "Thus, the performance of I+A implies that many IQA triplets can be solved by object, attribute or concept detection on the image, without understanding the questions.", "This is indeed the case for humans as well: humans can achieve 75.3% by considering I+A and not Q.", "Note that the difference between machine and human on I+A is likely due to their difference in understanding visual information.", "Note that humans improve significantly from I+A to I+Q+A with Q added, while the machine does so only marginally.", "The difference can be attributed to the difference between the two in understanding the question and correlating it with the answers.", "Since each image corresponds to multiple questions or has multiple objects, solely relying on the image itself will not work well in principle.", "Such a difference clearly indicates that in the Visual QA model the language component is weak: the model cannot fully exploit the information in Q, making a smaller relative improvement of 5.3% (from 62.4% to 65.7%), whereas humans improve by a relative 17.4%.", "As explained above, the decoys are drawn from all plausible answers to a question, irrespective of whether they are visually grounded or not.", "We have also discovered that the targets (i.e., correct answers) are infrequently used as decoys.", "Specifically, among the 69,817 training samples, there are 19,503 unique correct answers, and each one of them is used about 3.6 times as a correct answer to a question.", "However, among all the 69,817 × 3 ≈ 210K decoys, each correct answer appears 7.2 times on average, far below a chance level of 10.7 times (210K / 19,503 ≈ 10.7).", "This disparity exists in the test samples too.", "Consequently, the following rule, computing each answer's likelihood of being correct, should perform well: P(correct | C) = 0.5 if C is never seen in training, and otherwise (# times C as target) / (# times C as target + (# times C as decoy) / K) (Eq. 2).", "Essentially, it measures how unbiasedly C is used as the target versus as a decoy.", "Indeed, it attains an accuracy of 48.73% on the test data, far better than random guessing and close to the learning model using the answers' information only (the A row in Table 1).",
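The answer-prior rule of Eq. (2) needs nothing beyond target and decoy counts, as the following sketch shows; the data-structure choices are ours, and only the formula itself comes from the text.

```python
# A minimal sketch of the answer-prior rule in Eq. (2), which scores a
# candidate purely from how often it appeared as a target vs. as a decoy
# in training (K = number of decoys per question). Counting details are
# illustrative.
from collections import Counter

def build_prior(train_triplets, K=3):
    # train_triplets: iterable of (target, decoys) pairs from training data
    as_target, as_decoy = Counter(), Counter()
    for target, decoys in train_triplets:
        as_target[target] += 1
        for d in decoys:
            as_decoy[d] += 1

    def p_correct(c: str) -> float:
        if as_target[c] == 0 and as_decoy[c] == 0:
            return 0.5  # never seen in training
        return as_target[c] / (as_target[c] + as_decoy[c] / K)

    return p_correct

# At test time: pick the candidate with the highest p_correct(c),
# ignoring both the image and the question.
```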
"Good rules for designing decoys: based on our analysis, we summarize the following guiding rules for designing decoys: (1) Question only Unresolvable (QoU).", "The decoys need to be equally plausible to the question.", "Otherwise, machines can rely on the correlation between the question and candidate answers to tell the target from decoys, even without the images.", "Note that this is a principle that is being followed by most datasets.", "(2) Neutrality.", "The decoy answers should be equally likely to be used as correct answers.", "(3) Image only Unresolvable (IoU).", "The decoys need to be plausible to the image.", "That is, they should appear in the image, or there should exist questions so that the decoys can be treated as targets for the image.", "Otherwise, Visual QA can be resolved by object, attribute, or concept detection in images, even without the questions.", "Ideally, each decoy in an IQA triplet should meet the three principles.", "Neutrality is comparably easier to achieve, by reusing terms in the whole set of targets as decoys.", "On the contrary, a decoy may hardly meet QoU and IoU simultaneously (e.g., in Fig. 1, for the question What vehicle is pictured?, the only answer that meets both principles is train, which is the correct answer instead of being a decoy).", "However, as long as all decoys of an IQA triplet meet Neutrality, and some meet QoU and others meet IoU, the triplet as a whole still achieves the three principles: a machine ignoring either images or questions will likely perform poorly.", "In this section, we describe our approaches to remedying design deficiencies in the existing datasets for the Visual QA task.", "We introduce two automatic and widely-applicable procedures to create new decoys that can prevent learning models from exploiting incidental statistics in the datasets.", "Main ideas: our procedures operate on a dataset that already contains image-question-target (IQT) triplets, i.e., we do not assume it has decoys already.", "For instance, we have used our procedures to create a multiple-choice dataset from the Visual Genome dataset, which has no decoys.", "We assume that each image in the dataset is coupled with multiple QT pairs, which is the case in nearly all the existing datasets.", "Given an IQT triplet (I, Q, T), we create two sets of decoy answers.", "QoU-decoys: we search among all other triplets that have similar questions to Q. The targets of those triplets are then collected as the decoys for T. As the targets to similar questions are likely plausible for the question Q, QoU-decoys likely follow the rules of Neutrality and Question only Unresolvable (QoU).", "We compute the average WORD2VEC (Mikolov et al., 2013) embedding to represent a question, and use the cosine similarity to measure the similarity between questions.", "IoU-decoys: we collect the targets from other triplets of the same image to be the decoys for T.",
"The resulting decoys thus definitely follow the rules of Neutrality and Image only Unresolvable (IoU).", "We then combine the triplet (I, Q, T) with QoU-decoys and IoU-decoys to form an IQA triplet as a training or test sample.", "Resolving ambiguous decoys: one potential drawback of automatically selected decoys is that they may be semantically similar, ambiguous, or rephrased terms of the target (Zhu et al., 2016).", "We utilize two filtering steps to alleviate this.", "First, we perform string matching between a decoy and the target, deleting those decoys that contain or are covered by the target (e.g., daytime vs. during the daytime, and ponytail vs. pony tail).", "Secondly, we utilize the WordNet hierarchy and the Wu-Palmer (WUP) score (Wu and Palmer, 1994) to eliminate semantically similar decoys.", "The WUP score measures how similar two word senses are (in the range of [0, 1]), based on their depth in the taxonomy and that of their least common subsumer.", "We compute the similarity of two strings according to the WUP scores in a similar manner to (Malinowski and Fritz, 2014), in which the WUP score is used to evaluate Visual QA performance.", "We eliminate decoys that have high WUP-based similarity to the target.", "We use the NLTK toolkit (Bird et al., 2009) to compute the similarity.", "See the Supplementary Material for more details.", "Other details: for QoU-decoys, we sort and keep for each triplet the top N (e.g., 10,000) similar triplets from the entire dataset according to the question similarity.", "Then for each triplet, we compute the WUP-based similarity of each potential decoy to the target successively, and accept those with similarity below 0.9 until we have K decoys.", "We choose 0.9 according to (Malinowski and Fritz, 2014).", "We also perform such a check among the selected decoys to ensure they are not very similar to each other.", "For IoU-decoys, the potential decoys are sorted randomly.", "The WUP-based similarity with a threshold of 0.9 is then applied to remove ambiguous decoys.", "Several authors have noticed the design deficiencies in the existing databases and have proposed fixes (Antol et al., 2015; Yu et al., 2015; Zhu et al., 2016; Das et al., 2017).", "No dataset has used a procedure to generate IoU-decoys.", "We empirically show how the IoU-decoys significantly remedy the design deficiencies in the datasets.", "Several previous efforts have generated decoys that are similar in spirit to our QoU-decoys.", "Yu et al. (2015), Das et al. (2017), and Ding et al. (2016) automatically find decoys from similar questions or captions based on question templates and annotated objects, tri-grams and GLOVE embeddings (Pennington et al., 2014), and paragraph vectors (Le and Mikolov, 2014) and linguistic surface similarity, respectively.", "The latter two are for different tasks than Visual QA, and only Ding et al. (2016) consider removing semantically ambiguous decoys as we do.", "Antol et al. (2015) and Zhu et al. (2016) ask humans to create decoys, given the questions and targets.", "As shown earlier, such decoys may disobey the rule of Neutrality.",
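A rough sketch of the decoy-construction pipeline just described, combining the question-similarity search, the string-containment filter, and the WUP filter at threshold 0.9. It assumes gensim-style word vectors and NLTK's WordNet; treating a multi-word answer by its first word is our simplification of whatever string-level WUP computation the paper borrows from Malinowski and Fritz (2014).

```python
# A minimal sketch of QoU-decoy selection with the two filtering steps
# described above. word_vectors is assumed to be a gensim-style
# word -> vector mapping; details are illustrative.
import numpy as np
from itertools import product
from nltk.corpus import wordnet as wn

def question_embedding(question, word_vectors):
    vecs = [word_vectors[w] for w in question.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)  # averaged WORD2VEC representation

def wup_similar(answer_a: str, answer_b: str, threshold: float = 0.9) -> bool:
    # Approximate string-level WUP similarity by the max over sense pairs
    # of the first word of each answer (a simplification).
    synsets_a = wn.synsets(answer_a.split()[0])
    synsets_b = wn.synsets(answer_b.split()[0])
    scores = [sa.wup_similarity(sb) or 0.0 for sa, sb in product(synsets_a, synsets_b)]
    return bool(scores) and max(scores) >= threshold

def qou_decoys(target, similar_triplet_targets, K=3):
    # similar_triplet_targets: targets of the most similar questions,
    # pre-sorted by cosine similarity of question embeddings.
    decoys = []
    for cand in similar_triplet_targets:
        # String-containment filter, then WUP filter against the target.
        if cand in target or target in cand or wup_similar(cand, target):
            continue
        # Also keep selected decoys mutually dissimilar.
        if all(not wup_similar(cand, d) for d in decoys):
            decoys.append(cand)
        if len(decoys) == K:
            break
    return decoys
```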
"Goyal et al. (2017) augment the VQA dataset (Antol et al., 2015) (by human effort) with additional IQT triplets to eliminate the shortcuts (language priors) in the open-ended setting.", "Their effort is complementary to ours on the multiple-choice setting.", "Note that an extended task of Visual QA, visual dialog (Das et al., 2017), also adopts the latter setting.", "We examine our automatic procedures for creating decoys on five datasets.", "Table 2 summarizes the characteristics of the three datasets we focus on.", "VQA Real (Antol et al., 2015): the dataset uses images from MSCOCO (Lin et al., 2014) under the same training/validation/testing splits to construct IQA triplets.", "In total, 614,163 IQA triplets are generated for 204,721 images.", "Each question has 18 candidate answers: in general, 3 decoys are human-generated, 4 are randomly sampled, and 10 are randomly sampled frequently-occurring targets.", "As the test set does not indicate the targets, our studies focus on the training and validation sets.", "Visual7W Telling (Visual7W) (Zhu et al., 2016): the dataset uses 47,300 images from MSCOCO (Lin et al., 2014) and contains 139,868 IQA triplets.", "Each has 3 decoys generated by humans.", "Visual Genome (VG) (Krishna et al., 2017): the dataset uses 101,174 images from MSCOCO (Lin et al., 2014) and contains 1,445,322 IQT triplets.", "No decoys are provided.", "Human annotators are asked to write diverse pairs of questions and answers freely about an image or with respect to some regions of it.", "On average, an image is coupled with 14 question-answer pairs.", "We divide the dataset into non-overlapping 50%/20%/30% portions for training/validation/testing.", "Additionally, we partition such that each portion is a superset of the corresponding one in Visual7W.", "Creating decoys: we create 3 QoU-decoys and 3 IoU-decoys for every IQT triplet in each dataset, following the steps in Sect. 4.1.", "In the cases where we cannot find 3 decoys, we include random ones from the original set of decoys for VQA and Visual7W; for the other datasets, we randomly include those from the top 10 frequently-occurring targets.", "Visual QA models: we utilize the MLP models mentioned in Sect. 3 for all the experiments.", "We denote by MLP-A, MLP-QA, MLP-IA, and MLP-IQA the models using A (Answers only), Q+A (Question plus Answers), I+A (Image plus Answers), and I+Q+A (Image, Question and Answers) as multimodal information, respectively.", "The hidden layer has 8,192 neurons.", "We use a 200-layer ResNet (He et al., 2016) to compute visual features, which are 2,048-dimensional.", "The ResNet is pre-trained on ImageNet (Russakovsky et al., 2015).", "The WORD2VEC features (Mikolov et al., 2013) for questions and answers are 300-dimensional, pre-trained on Google News.", "(We experiment with different features in the Supplementary Material.)", "The parameters of the MLP models are learned by minimizing the binary logistic loss of predicting whether or not a candidate answer is the target of the corresponding IQA triplet.", "Please see the Supplementary Material for details on optimization.", "We further experiment with a variant of the spatial memory network (denoted as Attention) (Xu and Saenko, 2016) and the HieCoAtt model (Lu et al., 2016), adjusted for the multiple-choice setting.", "Both models utilize the attention mechanism.", "Details are listed in the Supplementary Material.", "Evaluation metric: for VQA and VQA2, we follow their protocols by comparing the picked answer to 10 human-generated targets.", "The accuracy is computed based on the number of exactly matched targets (divided by 3 and clipped at 1).",
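The VQA evaluation protocol just mentioned reduces to a one-liner, sketched below; the exact-string matching and the absence of the official answer normalization are simplifications.

```python
# A minimal sketch of the VQA-style accuracy described above: the picked
# answer is compared against 10 human-provided targets, and credit is the
# number of exact matches divided by 3, clipped at 1.
def vqa_accuracy(picked: str, human_targets: list[str]) -> float:
    matches = sum(picked == t for t in human_targets)  # 10 targets expected
    return min(1.0, matches / 3.0)

# Example: if 3 or more of the 10 annotators gave exactly the picked
# answer, the prediction receives full credit:
# vqa_accuracy("dog", ["dog"] * 4 + ["puppy"] * 6) -> 1.0
```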
"For the other datasets, we compute the accuracy of picking the target from the multiple choices.", "Decoy sets to compare: for each dataset, we derive several variants: (1) Orig: the original decoys from the datasets; (2) QoU: Orig replaced with ones selected by our QoU-decoy generating procedure; (3) IoU: Orig replaced with ones selected by our IoU-decoy generating procedure; (4) IoU+QoU: Orig replaced with ones combining QoU and IoU; (5) All: combining Orig, QoU, and IoU.", "User studies: automatic decoy generation may lead to ambiguous decoys, as mentioned in Sect. 4 and (Zhu et al., 2016).", "We conduct a user study via Amazon Mechanical Turk (AMT) to test humans' performance on the datasets after they are remedied by our automatic procedures.", "We select 1,000 IQA triplets from each dataset.", "Each triplet is answered by three workers, and in total 169 workers are involved.", "The total cost is $215; the rate for every 20 triplets is $0.25.", "We report the average human performance and compare it to the learning models'.", "See the Supplementary Material for more details.", "The performances of learning models and humans on the 3 datasets are reported in Tables 3, 4, and 5.", "(We note that in Table 3, the 4.3% drop in human performance on IoU+QoU, compared to Orig, is likely due to the fact that IoU+QoU has more candidates (7 per question).", "Besides, the human performance on qaVG cannot be directly compared to that on the other datasets, since the questions in qaVG tend to focus on local image regions and are considered harder.)", "Effectiveness of new decoys: a better set of decoys will force learning models to integrate all 3 pieces of information (images, questions and answers) to make the correct selection from the multiple choices.", "In particular, they should prevent learning algorithms from exploiting shortcuts such that partial information is sufficient for performing well on the Visual QA task.", "Table 3 clearly indicates that those goals have been achieved.", "With the Orig decoys, the relatively small gain from MLP-IA to MLP-IQA suggests that the question information can be ignored while attaining good performance.", "However, with the IoU-decoys, which require the questions to resolve (as the image itself is inadequate), the gain is substantial (from 27.3% to 84.1%).", "Likewise, with the QoU-decoys (the question itself is not adequate to resolve them), including image information improves from 40.7% (MLP-QA) substantially to 57.6% (MLP-IQA).", "Note that with the Orig decoys, this gain is smaller (58.2% vs. 65.7%).",
"It is expected that MLP-IA matches better with QoU-decoys but not IoU-decoys, and MLP-QA the other way around.", "Thus it is natural to combine these two types of decoys.", "What is particularly appealing is that MLP-IQA improves noticeably over models learned with partial information on the combined IoU+QoU-decoys (and All decoys).", "(We note that the decoys in Orig are not trivial, which can be seen from the gap between All and IoU+QoU.", "Our main concern with Orig is that, for those questions that machines can accurately answer, they mostly rely on only partial information.", "This will thus hinder designing machines to fully comprehend and reason from multimodal information.", "We further experiment on random decoys, which can achieve Neutrality but not the other two principles, to demonstrate the effectiveness of our methods in the Supplementary Material.)", "Furthermore, using answer information only (MLP-A) attains about chance-level accuracy.", "On the VQA dataset (Table 4), the same observations hold, though to a lesser degree.", "In any of the IoU or QoU columns, we observe substantial gains when the complementary information is added to the model (such as MLP-IA to MLP-IQA).", "All these improvements are much more visible than those observed on the original decoy sets.", "Combining both Tables 3 and 4, we notice that the improvements from MLP-QA to MLP-IQA tend to be lower when facing IoU-decoys.", "This is also expected, as it is difficult to have decoys that are simultaneously both IoU and QoU; such answers tend to be the target answers.", "Nonetheless, we deem this a future direction to explore.", "Differences across datasets: contrasting Visual7W to VQA (on the column IoU+QoU), we notice that Visual7W tends to have bigger improvements in general.", "This is due to the fact that VQA has many questions with Yes or No as the targets; the only valid decoy to the target Yes is No, and vice versa.", "As such decoys are already captured by Orig of VQA (Yes and No are both top frequently-occurring targets), adding other decoy answers will not make any noticeable improvement.", "In the Supplementary Material, however, we show that once we remove such question/answer pairs, the degree of improvement increases substantially.", "Comparison of Visual QA models: as presented in Tables 3 and 4, MLP-IQA is on par with or even outperforms Attention and HieCoAtt on the Orig decoys, showing how the shortcuts make it difficult to compare different models.", "Table 6 (using models trained on qaVG to improve Visual7W and VQA; accuracy in %, columns: best model without qaVG / using qaVG, initial / using qaVG, fine-tuned): Visual7W: Orig 65.7 / 60.5 / 69.1; IoU+QoU 52.0 / 58.1 / 58.7; All 45.1 / 48.9 / 51.0. VQA: Orig 64.6 / 42.2 / 65.6; IoU+QoU 63.7 / 47.9 / 64.1; All 58.9 / 37.5 / 59.4.", "By eliminating the shortcuts (i.e., on the combined IoU+QoU-decoys), the advantage of using sophisticated models becomes obvious (Attention outperforms MLP-IQA by 3% in Table 4), indicating the importance of designing advanced models for achieving human-level performance on Visual QA.", "For completeness, we include the results on the Visual Genome dataset in Table 5.", "This dataset has no Orig decoys, and we have created a multiple-choice based dataset, qaVG, from it for the task; it has over 1 million triplets, the largest dataset for this task to our knowledge.", "On the combined IoU+QoU-decoys, we again clearly see that machines need to use all the information to succeed.", "With qaVG, we also investigate whether it can help improve the multiple-choice performances on the other two datasets.",
"We use the MLP-IQA trained on qaVG with both IoU and QoU decoys to initialize the models for the Visual7W and VQA datasets.", "We report the accuracies before and after fine-tuning, together with the best results learned solely on those two datasets.", "As shown in Table 6, fine-tuning largely improves the performance, corroborating the finding by Fukui et al. (2016).", "In Fig. 2, we present examples of image-question-target triplets from V7W, VQA, and VG, together with our IoU-decoys (A, B, C) and QoU-decoys (D, E, F).", "G is the target.", "The predictions by the corresponding MLP-IQA are also included.", "Ignoring information from the images or the questions makes it extremely challenging to answer these triplets correctly, even for humans.", "Our automatic procedures do fail at some triplets, resulting in decoys that are ambiguous with the targets.", "See Fig. 3 for examples.", "We categorize those failure cases into two situations.", "First, the WUP-based similarity relies on the WordNet hierarchy.", "For some semantically similar words like lady and woman, the similarity is only 0.632, much lower than the 0.857 between cat and dog.", "This issue can be alleviated by considering alternative semantic measures by WORD2VEC or by those used in (Das et al., 2017; Ding et al., 2016) for searching similar questions.", "Second, the question may be ambiguous to answer.", "In the bottom example in Fig. 3, both candidates D and F seem valid as a target.", "Another representative case is when asked about the background of an image.", "In images that contain sky and mountains in the distance, both terms can be valid.", "We perform a detailed analysis of existing datasets for multiple-choice Visual QA.", "We found that the design of decoys can inadvertently provide shortcuts for machines to exploit to perform well on the task.", "We describe several principles for constructing good decoys and propose automatic procedures to remedy existing datasets and create new ones.", "We conduct extensive empirical studies to demonstrate the effectiveness of our methods in creating better Visual QA datasets.", "The remedied datasets and the newly created ones are released and available at http://www.teds.usc.edu/website_vqa/ .", "This work is partially supported by a USC Graduate Fellowship, NSF IIS-1065243, 1451412, 1513966/1632803, 1208500, CCF-1139148, a Google Research Award, an Alfred P. Sloan Research Fellowship, and ARO #W911NF-12-1-0241 and W911NF-15-1-0484." ]
[ "abstain", "method", "method", "result", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "method", "abstain", "abstain", "objective", "result", "result", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "other", "objective", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "objective", "other", "abstain", "other", "other" ]
[ "Modern neural language models can produce remarkably fluent and grammatical text.", "So much so, in fact, that recent work by Clark et al. (2021) has reported that conventional crowdsourcing can no longer reliably distinguish between machine-authored (GPT-3) and human-authored writing.", "As errors in machine generations become ever subtler and harder to spot, this poses a new challenge to the research community for robust machine text evaluation.", "We propose a new framework called SCARECROW for scrutinizing machine text via crowd annotation.", "To support the broad range of real machine errors that can be identified by laypeople, the ten error categories of SCARECROW, such as redundancy, commonsense errors, and incoherence, are identified through several rounds of crowd annotation experiments without a predefined ontology.", "We then use SCARECROW to collect over 41k error spans in human-written and machine-generated paragraphs of English language news text.", "We isolate factors for detailed analysis, including parameter count, training data, and various decoding-time configurations.", "Our approach successfully quantifies measurable gaps between human-authored text and generations from models of several sizes, including fourteen configurations of GPT-3.", "In addition, our analysis unveils new insights, with detailed rationales provided by laypeople, e.g., that commonsense capabilities have been improving with larger models while math capabilities have not, and that the choice of simple decoding hyperparameters can make remarkable differences in the perceived quality of machine text.", "We release our training material, annotation toolkit and dataset at https://yao-dou.github.io/scarecrow/ .", "Clark et al. (2021) demonstrated the challenges of human evaluation in the era of GPT-3 (Brown et al., 2020), as crowd workers are no longer able to reliably distinguish GPT-3's generations from human-written text.", "[Figure 1 shows an example SCARECROW annotation of a GPT-3 generation about Apple's reported electric-vehicle plans, continuing a prompt that cites the Financial Times; the annotated generation includes details such as hiring 1,200 engineers for the iOS team, building a CarPlay-specific testing track, developing a Lincoln Navigator, poaching Burberry's head of product design, and a WWDC 2015 aside.]", "Or are they?", "In this paper, we propose a new framework for systematically scrutinizing machine text so that even crowd workers, despite the known challenges reported by recent literature, can successfully critique seemingly fluent generations.", "We not only quantify a measurable gap between machine-generated and human-authored text, but also characterize the kinds of errors that remain.", "[Table 1 lists each SCARECROW error type with a definition and an example; e.g., under Language Errors, Grammar and Usage is defined as missing, extra, incorrect, or out of order words, with the example ...explaining how cats feel emoticons ....]", "To achieve this, we develop SCARECROW, a methodology for eliciting categorical judgments of errors in machine-generated text from crowd workers.", "One goal in natural language generation (NLG) is to produce fluent outputs which can be read by laypeople.", "As such, we propose that the important errors to address are those which are recognized by readers without NLP expertise.", "Our framework allows crowd workers to annotate problems in model outputs at the span level.", "A single such annotation is shown in Figure 1.",
"To make this possible, we establish a categorization of shortcomings commonly found in machine-generated text (Table 1).", "This error schema covers a broad scope of problems as identified by experts, but has been honed according to what is salient to non-expert readers through several pilot rounds of crowd annotation without a fixed label set.", "The result is a framework that is usable by everyday people with minimal training, but covers the error phenomena found in real machine-generated text.", "Labeling spans of text using specific error types creates a picture of contemporary model generations with an unprecedented level of detail.", "In contrast to judging text holistically (Celikyilmaz et al., 2021), insights from this method are specific and practical, as it measures exactly how and where problems arise.", "We conduct a large-scale analysis of human-written and machine-generated text using SCARECROW, collecting 13k annotations of 1.3k paragraphs, amassing 41k spans labeled with error type, severity, and an explanation.", "Through this, we characterize in which ways GPT-3's generations are better than those of previous models, and which aspects do not improve with increased data and parameters.", "We also provide a rigorous error analysis of text generated by several other contemporary language models, examining the impact of model size, training data, and decoding strategy.", "We provide our detailed annotator training system and task interface so that future researchers may employ and refine them for error analyses of machine-generated text.", "We hope this will contribute to the standardization of NLG human evaluation (Howcroft et al., 2020).", "We perform a large-scale annotation of errors in English news text generated by five sources (four models and ground-truth articles).", "We present Figures 2, 3, and 4 as summaries of our main results.", "As a reminder to readers, Grover (Zellers et al., 2019) is the same model size and architecture as GPT-2 XL (Radford et al., 2019), but trained in-domain (on news text).", "As such, our results cover three increasing model sizes (GPT-2 Small, XL, and GPT-3 (Brown et al., 2020)), one change in domain (Grover), and ground-truth text (Human).", "For GPT-3, we also study a variety of decoding configurations (Figure 4).", "The main quantity we measure (on y-axes) is span coverage, which is the average portion of tokens that ends up covered by annotations of a particular error type.", "Since it is possible that multiple spans nest or overlap, there is no upper bound for this quantity.", "(See Figure 12 for a comparison of span coverage with other measurement alternatives.)", "Figure 2 measures span coverage for each type of span separately, Figure 3 stacks them, and Figure 4 removes non-error spans (reader issues) before adding them (as in Figure 3, but without showing the individual types).",
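Span coverage, as defined above, is simple to compute; the sketch below makes the no-upper-bound property explicit, since overlapping spans each contribute their full length. The encoding of spans as token offsets is our assumption.

```python
# A minimal sketch of the span coverage statistic described above: the
# average portion of tokens covered by annotated spans of a given error
# type. Overlapping spans each count in full, so coverage can exceed 1.
def span_coverage(num_tokens: int, spans: list[tuple[int, int]]) -> float:
    # spans: (start, end) token offsets, end exclusive, for one error type.
    covered = sum(end - start for start, end in spans)
    return covered / num_tokens

def average_coverage(annotated) -> float:
    # annotated: iterable of (num_tokens, spans) pairs over a corpus.
    scores = [span_coverage(n, s) for n, s in annotated]
    return sum(scores) / len(scores)
```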
"The following are our key findings.", "1. Scaling pays off to improve Encyclopedic, Commonsense, and Incoherent errors (Fig. 2).", "These error categories decrease with in-domain training (Grover) and larger model size (GPT-3).", "The Needs Google and Technical Jargon span categories both have a humans-highest trend, and both fall under reader issues: problems that are not necessarily errors, but that still prevent full comprehension.", "[Figure 3: Average portion of tokens covered by span annotations, broken down by error type; models on the x-axis are GPT-2 Small, GPT-2 XL, Grover-Mega, GPT-3, and Human, with span coverage from 0.0 to 0.7 on the y-axis.]", "2. Scaling benefits plateau for Off-Prompt, Bad Math, and Grammar and Usage errors (Fig. 2).", "These three error categories see a plateau in error reduction when scaling to GPT-3.", "Of these error types, humans still commit fewer Off-Prompt (more: E.1) and Grammar and Usage errors, but Bad Math appears saturated for our domain.", "3. Self-Contradiction and Redundant errors exhibit more complex scaling behavior (Fig. 2).", "We roughly categorize these trends as rising and falling: increasing for medium- or large-scale models, but dropping for human-authored text.", "Text generated by GPT-2 Small is so often incoherent that there is little possibility for Self-Contradiction (more: E.2), and the increase in Redundant errors varies based on how errors are counted (more: E.3).", "4. Human-authored text produces the most reader issues (Figs. 2 and 3).", "These are spans that prevent full comprehension or factual verification of the text (more: E.4).", "Furthermore, human-authored text is not free from error annotations (Figure 3).", "This can serve either as a control for baseline error rates (more: E.6), or as a mechanism for critiquing human writing.", "5. Decoding hyperparameters have a large impact (Figure 4).", "For the previous findings, we fix the sampling configuration for all models to an apples-to-apples setup for fair comparison: top-p = 0.96, (softmax) temperature = 1, and no frequency penalty (i.e., word repetition penalty; defined precisely in Sect. 5.2, Equation 1).", "To study the effects of these decoding settings, we annotate text generated by GPT-3 using a variety of values for top-p and temperature, both with and without a frequency penalty.", "To our surprise, the decoding hyperparameters considerably affected error rates (more: E.5).", "As seen in Figure 4, the worst sampling procedure for GPT-3 (argmax sampling with no frequency penalty) performed even worse than GPT-2 XL.", "But the best sampling procedure (surprisingly, also argmax sampling, but with a frequency penalty) produced text with as few apparent SCARECROW error spans as those authored by humans (more: E.6).", "All of these findings are discussed in more detail in Appendix E.",
"3 Evaluation of Natural Language Generation We make our study in the area of open-ended natural language generation, a loose term for generating longer texts with an increased level of creative freedom.", "The common factor in all open-ended generation tasks, such as story, blog, and dialog generation, is the wide and diverse nature of target outputs.", "Lexically and even semantically dissimilar responses to the same prompt can be equally valid.", "For example, a model prompted with the blog title Recipes for success this Holiday season could describe how to roast a turkey or strategies for dealing with the stresses of holiday travel.", "This allowable variation poses a particular difficulty for the evaluation of generation systems.", "Traditionally, text generation quality for tasks like machine translation or graph-to-text generation has been measured by word overlap with human-authored references (Papineni et al., 2002; Lin, 2004).", "Though measures like BLEU allow for multiple references, they break down when the space of allowable outputs is large, as in open-ended generation.", "Recently introduced metrics seek to remedy this problem (Hashimoto et al., 2019; Pillutla et al., 2021), but the gold standard for evaluating generated text is still human judgment.", "However, current approaches to eliciting human judgment of generated text often do not provide detailed insight into where models are making progress, where they are failing, and the scope of these failures.", "A/B-style testing allows for directly comparing one system against others (Clark and Smith, 2021), but can only express relative improvements.", "Simple Likert scale judgments can assess text quality, but do not explain why a generated text receives a given rating, or which segment of the text is problematic.", "Insights into model failures often come instead from a small-scale expert analysis of outputs.", "However, these error analyses, once a staple of NLP research, have become less common in recent years, perhaps due to their small size and high variance.", "A hypothesis of the current work is that a well-designed error analysis annotation framework could be used by crowdworkers to annotate large amounts of text, thereby providing detailed information about model progress and failures as well as actionable directions for future research.", "Such a framework would be easy to learn, reusable, and independent of particular models or experimental conditions.", "In what follows, we outline the details of such a method.", "This section describes the high-level annotation methodology for SCARECROW.", "Our annotations consider two segments of text: a one-sentence prompt, and a one-paragraph generation.", "The prompt is human-written.", "It provides both starting tokens for model generation, as well as context for humans to evaluate whether a model is able to stay on-prompt, both topically and factually.", "Annotators know that the prompt is written by a human.", "The generation is either text sampled from a language model, or the human-authored continuation to the prompt.", "Annotators, who do not know whether the generation came from a model or humans, assess this text.", "A paragraph length (80-145 tokens) is chosen to balance expressiveness with scope.", "For expressiveness, models must be given a sufficient number of tokens to express their capabilities lexically, syntactically, and semantically.", "One paragraph allows for significantly more variation than a single sentence.",
"On the other hand, assessing multiple paragraphs is challenging, both as a crowdsourcing task itself, and because it broadens the kinds of errors to include larger narrative scope.", "We leave extensions of SCARECROW to longer narrative lengths for future work.", "Annotators select spans that contain problems in the generation.", "The spans are automatically snapped to word boundaries.", "We choose spans to balance specificity (i.e., vs. simply commenting on the text as a whole) with ease of use (vs. imposing a more structured annotation schema).", "We instruct workers to select the smallest span, minimally a single word, that contains an issue.", "Sometimes this involves an entire phrase, sentence, or multiple sentences.", "We aim for specificity because during aggregation it is possible to back off annotations to larger spans, but not the inverse.", "Once they select a span, workers (1) label the error type, (2) choose a severity level, and (3) explain their reasoning behind the error.", "Workers use the annotation interface shown in Figure 5 (e.g., a span whose explanation reads Inconsistent about how many moons Mars has) to mark a span with these three steps.", "We describe each step in greater detail in the next three sections.", "Each selected span is labeled with exactly one error type.", "Multiple errors may be marked with partially or fully overlapping spans in the case that one text segment contains multiple problems.", "We chose ten error types to balance three criteria: linguistic analysis, observed errors in generated text, and the capabilities of everyday people with one to two hours of training.", "We developed the schema by starting with the first two criteria (linguistic analysis and observed errors), and refining it over several pilot annotation studies, with 30 crowd workers performing 750 total annotations of 60 paragraphs before beginning data collection.", "We broadly group the errors into three categories: language errors, factual errors, and reader issues.", "Language errors are issues with the internal and external structure of text: which ideas are expressed, and whether they are expressed coherently and consistently.", "Factual errors denote that the information presented is known to be incorrect.", "Reader issues, on the other hand, are cases where the text is too technical or obscure to assess its factuality.", "Hence, reader issues are not errors, per se, but regions where a reader would need assistance outside of the text itself for comprehension.", "Errors naturally vary in how jarring they are to a reader.", "We define three error severity levels, and ask annotators to pick one for each error.", "The severity levels are as follows.", "(1) Almost no impact on quality; just a small problem.", "(2) Understandable, but difficult; what's written is still comprehensible, but there's clearly an issue.", "(3) Very difficult to understand; the error almost completely ruins the text.", "We provide examples of each severity in Appendix B.1.", "In this paper, we omit an analysis of the severity labels (except for an illustration in Figure 12), but include them in our data release for future work to explore.", "Finally, we ask annotators to explain their reasoning behind each error in natural language.", "We provide example explanations during training, but do not impose strict guidelines.", "This paper primarily focuses on quantitative error analysis, but we anticipate the error explanations may warrant future investigation.", "We use Amazon Mechanical Turk (AMT) for all data collection.",
"Training: we first pay each worker $40 to take an extensive qualification task, which both trains them in the span categorization scheme and quizzes their understanding.", "We pass workers if they score 90 points out of 100 (details in Appendix B.2).", "Annotation: workers annotate each paragraph using a custom annotation interface (shown partially in Figure 5), for which we pay $3.50.", "We calculated $3.50 per annotation by aiming to pay workers at least $15/hour.", "After several annotation rounds, we observed considerable variation in time per annotation, so this cost should not necessarily be seen as a requirement for SCARECROW annotations.", "We collect 13k human annotations of 1.3k paragraphs using SCARECROW, resulting in over 41k spans.", "We consider four model configurations to test recent state-of-the-art transformer-based (Vaswani et al., 2017) models.", "GPT-2 Small (Radford et al., 2019): the 117M-parameter variant of GPT-2, which is pretrained on WebText, without additional fine-tuning.", "GPT-2 XL (Radford et al., 2019): the 1.5B-parameter variant of GPT-2 (WebText, no fine-tuning).", "Grover-Mega (Zellers et al., 2019): the 1.5B-parameter variant of Grover, a model with the same architecture and parameter count as GPT-2, trained on news articles and their metadata.", "GPT-3 DaVinci (Brown et al., 2020): the 175B-parameter variant of GPT-3, which is trained on a version of the Common Crawl web scrape with additional filtering and deduplication.", "We consider three main hyperparameters when sampling from models: p for top-p or nucleus sampling (Holtzman et al., 2020), an alternative to top-k; t for the softmax temperature; and f.p. for the frequency penalty.", "The frequency penalty scales a token's likelihood based on how many times it was already generated, by applying the following modification to the model's output: ℓ_i(t) ← ℓ_i(t) − c_{<i}(t) · f (1), where ℓ_i(t) is the model's output for token t at the i-th position, c_{<i}(t) is the count of token t's sampled occurrences prior to the i-th position, and f is the frequency penalty.", "(While ℓ_i(t) is defined to be logits, i.e., un-normalized log-probabilities, because it is un-normalized we anticipate that it is simply the model's output before log(softmax(·)) is applied; see OpenAI's description of frequency and presence penalties: https://beta.openai.com/docs/api-reference/parameter-details .)",
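A minimal sketch of the three decoding knobs just introduced: the frequency penalty of Eq. (1), softmax temperature, and top-p filtering. This illustrates the arithmetic in PyTorch and is not OpenAI's implementation; the top-p masking follows Holtzman et al. (2020) in spirit.

```python
# A minimal sketch of the decoding modifications discussed above, for a
# single 1-D vector of next-token logits. Assumes PyTorch; details are
# illustrative.
import torch

def adjust_logits(logits: torch.Tensor, counts: torch.Tensor,
                  f: float = 1.0, t: float = 1.0, p: float = 0.96) -> torch.Tensor:
    # Eq. (1): subtract f * (times each token was already sampled).
    logits = logits - f * counts
    # Temperature: t -> 0 approaches argmax; t = 1 leaves logits unchanged.
    logits = logits / max(t, 1e-8)
    # Top-p (nucleus): keep the smallest set of tokens whose cumulative
    # probability exceeds p; mask out the rest.
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, idx = torch.sort(probs, descending=True)
    cum = torch.cumsum(sorted_probs, dim=-1)
    cut = torch.searchsorted(cum, torch.tensor(p)).item() + 1
    mask = torch.full_like(logits, float("-inf"))
    mask[idx[:cut]] = 0.0
    return logits + mask

# Sampling step:
# next_token = torch.multinomial(torch.softmax(adjusted, -1), 1)
```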
(2020) and OpenAI's removal of top-k from the GPT-3 API).", "While ℓ_i(t) is defined to be logits (un-normalized log-probabilities), because it is un-normalized, we anticipate that it is simply the model's output before log(softmax(·)) is applied; see OpenAI's description of frequency and presence penalties: https://beta.openai.com/docs/api-reference/parameter-details", "p ∈ {0.4, 0.7, 0.9, 0.96}; t ∈ {0.0 (argmax), 0.4, 0.7, 1.0}; f.p. ∈ {0 (none), 1 (full)}.", "For budget reasons, we only vary p and t independently, i.e., we set p = 0.96 when varying t, and t = 1.0 when varying p.", "We use news articles as the sources of prompts for models to condition on for generation.", "Specifically, we use news articles found in the Common Crawl.", "We select the first sentence as the prompt.", "Our use of news text is constrained by two factors.", "First, GPT-3 is trained on the Common Crawl, from 2016 through 2019.", "We wish to avoid testing GPT-3 by generating from articles it saw during training, due to the possibility of copying (Carlini et al., 2021).", "Second, news articles began heavily covering the COVID-19 pandemic beginning around February 2020.", "Though testing models' capabilities to generate text about unseen events is a valuable line of study, the distribution shift caused by COVID-19 in news writing about all aspects of life is difficult to overstate.", "As such, to make the comparison more amenable to models' training data, we consider news articles from January 2020.", "We select articles where there is a known topic, such as Food or Sports, from the Common Crawl metadata, to allow for studying any effect of coarse-grained subject.", "We generate between 80 and 145 tokens (counted by Stanza tokenization (Qi et al., 2020), not byte-pair encoding (BPE) or whitespace-separated tokens) from each model as a continuation to the first sentence of the news article.", "We stop generating when we heuristically detect the first sentence boundary after 80 tokens.", "If the model does not end a sentence between 80 and 145 tokens, we sample again.", "For the Human setting, we use the remainder of the article, similarly stopping after the first sentence boundary after 80 tokens.", "Crowdsourcing: Workers first complete training and qualification tasks.", "We provide more details in Section 4.7.", "From pilot studies, we discovered that each error, depending on its severity and clarity, has only a low to moderate chance of being identified by each worker.", "However, most worker-identified errors were truly problems.", "In other words, annotators labeled issues with high precision and low recall.", "To account for this, we have 10 workers annotate each paragraph.", "We examine the agreement and variability of annotations in Appendix C.", "Dataset statistics: We provide detailed dataset statistics in Appendix D."
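To make the decoding hyperparameters above concrete, here is a minimal, illustrative Python sketch of sampling with a frequency penalty, temperature, and nucleus (top-p) truncation. It is not OpenAI's implementation; the function and its arguments are hypothetical, and only the update ℓ_i(t) ← ℓ_i(t) − c_<i(t) · f comes from the text.

import numpy as np

def sample_token(logits, counts, p=0.96, t=1.0, f=0.0, rng=np.random):
    """Sample one token id using frequency penalty, temperature, and top-p.

    logits: model outputs over the vocabulary (1-D array)
    counts: per-token counts of previously sampled occurrences, c_<i(t)
    """
    # Frequency penalty (Eq. 1): l_i(t) <- l_i(t) - c_<i(t) * f
    logits = np.asarray(logits, dtype=float) - np.asarray(counts) * f
    if t == 0.0:                       # t = 0 corresponds to argmax decoding
        return int(np.argmax(logits))
    z = logits / t
    probs = np.exp(z - z.max())        # temperature-scaled softmax
    probs /= probs.sum()
    # Nucleus (top-p): smallest set of tokens whose cumulative mass >= p
    order = np.argsort(-probs)
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), p)) + 1
    keep = order[:cutoff]
    return int(rng.choice(keep, p=probs[keep] / probs[keep].sum()))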
, "6 Error Prediction: A natural question is: using this data, can machines learn to detect and classify errors in machine-generated text?", "Task: We frame this problem as a span classification task.", "Given a span from a generated text, the goal is to classify its error type or output No Error if there is none.", "Positive examples for each error class are taken from our data.", "We sample random spans that were not labeled with any error type as negative examples.", "To ensure a breadth of span lengths, we sample 3 negative spans for every length of error span in the generated text.", "We split the generated texts into train, development, and test sets using 1063 texts (28029 error spans), 100 texts (2538 spans) and 100 texts (2677 spans) respectively.", "Model: We use a standard span classification model inspired by Wadden et al. (2019).", "This model encodes every generated text using a pretrained language model (RoBERTa-large).", "Spans are represented with the final layer of this encoding.", "Following previous work, we concatenate the start and end tokens with a task-specific learned length embedding.", "The resulting vector is passed through a feedforward network which reduces its dimensionality to the number of error categories plus a No Error option.", "The resulting model has 357M trainable parameters.", "The model is trained to minimize the cross entropy of the correct span category.", "We train for 15 epochs using AdamW with a learning rate of 10^-6.", "We validate after each epoch and use the checkpoint with the lowest validation loss (epoch 8).", "Evaluation: To evaluate the error prediction model, we use per-token precision, recall, and F1 score per error category.", "We classify every span up to length 30 in a generated text.", "We take as gold labels the aggregated human error spans collected in our data.", "In other words, models predict the combined spans of all 10 annotators.", "For comparison, we also report as Human the average metrics of one annotator versus the others (i.e., 1-vs-9).", "(The difference in available references, 10 for models and 9 for humans, means this setup makes it easier for models to score higher in precision and for humans to score higher in recall; despite this, humans still achieve higher precision, and models still achieve higher recall.)", "Table 2 (model prediction results against combined spans of 10 annotators, compared with humans scored as one-vs-rest, i.e., 1-vs-9; each entry lists P/R/F1): Bad Math: Model 0, Human 0.72/0.14/0.24; Commonsense: Model 0.77/0.06/0.10, Human 0.17/0.02/0.04; Encyclopedic: Model 0, Human 0.22/0.03/0.05; Grammar and Usage: Model 0.29/0.23/0.26, Human 0.30/0.04/0.08; Incoherent: Model 0.59/0.34/0.43, Human 0.69/0.15/0.24; Off-Prompt: Model 0.67/0.29/0.41, Human 0.88/0.31/0.46; Redundant: Model 0.23/0.82/0.36, Human 0.88/0.35/0.50; Self-Contradiction: Model 0.08/0.23/0.12, Human 0.51/0.09/0.16; Technical Jargon: Model 0.18/0.74/0.29, Human 0.61/0.12/0.20; Needs Google: Model 0.59/0.96/0.73, Human 0.78/0.20/0.32.", "Results: Table 2 shows the error prediction capability of this model in terms of precision and recall.", "As we noted earlier, a single human annotator can be thought of as a high precision, low recall judge.", "These results bear out this claim.", "For all but one category, humans have higher precision annotations.", "However, the models trained on the aggregation of human labels can achieve considerably higher recall.", "For half of the error categories, this leads to higher model F1 scores than the human annotators.", "We see that the model is successful at identifying information that humans would have to manually verify (Needs Google), achieving nearly perfect recall with precision close to 0.6.", "The model can also identify Grammar and Usage, Incoherent, and Redundant errors with higher recall than an individual human annotator, though at the cost of precision (sometimes in the .20s)."
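The span classification head described above can be sketched as follows. This is a hedged, minimal PyTorch reconstruction, not the authors' released code: the hidden sizes, length-embedding dimension, and argument names are assumptions, while the overall recipe (RoBERTa-large encoding, concatenated start/end token vectors plus a learned length embedding, and a feedforward classifier over the ten error types plus No Error) follows the text.

import torch
import torch.nn as nn
from transformers import RobertaModel

NUM_ERROR_TYPES = 10  # the ten SCARECROW error types; +1 below for "No Error"

class SpanErrorClassifier(nn.Module):
    def __init__(self, max_span_len=30, len_dim=64):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-large")
        hidden = self.encoder.config.hidden_size
        # Task-specific learned embedding of the span length.
        self.len_embed = nn.Embedding(max_span_len + 1, len_dim)
        self.ffn = nn.Sequential(
            nn.Linear(2 * hidden + len_dim, 512),
            nn.ReLU(),
            nn.Linear(512, NUM_ERROR_TYPES + 1),  # +1 for "No Error"
        )

    def forward(self, input_ids, attention_mask, span_start, span_end):
        # Final-layer token representations of the generated text.
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(h.size(0), device=h.device)
        start, end = h[batch, span_start], h[batch, span_end]
        length = self.len_embed((span_end - span_start).clamp(max=30))
        # Concatenate start token, end token, and length embedding, then classify.
        return self.ffn(torch.cat([start, end, length], dim=-1))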
, "Automated evaluation metrics such as BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), METEOR (Banerjee and Lavie, 2005), and BERTScore (Zhang et al., 2019) compute a generation's score based on a (set of) reference(s).", "Their use is well-established in tasks like machine translation and summarization, but they are less helpful in open-ended text generation, where there is a vast diversity of possible high-quality continuations.", "Recent studies propose automated metrics for open-ended text generation evaluation such as: Perception Score (Gu et al., 2021), which diffuses evaluation onto a multidimensional space and assigns a single score; UNION (Guan and Huang, 2020), which learns to distinguish human-written stories from negative samples by generating perturbations of human-written stories; and MAUVE (Pillutla et al., 2021), which compares the distribution of machine-generated text to that of human language.", "An alternate recent approach to assessing open-ended text generation was presented in TuringAdvice (Zellers et al., 2021), where crowd workers assess machine-generated advice in response to Reddit posts.", "In their error analysis, Zellers et al. connect problems in generated text to core NLP tasks, such as Self-Contradiction errors as instances of failed natural language inference (Monz and de Rijke, 2001), or Off-Prompt errors as cases of failed reading comprehension (Richardson et al., 2013).", "While past work has attempted to guide text generation using discriminative models trained for such tasks (Holtzman et al., 2018), it remains an open challenge.", "Comparative human evaluations of natural language generations ask annotators to rank system outputs relative to each other.", "Text is typically evaluated using a few global criteria, such as fluency and relevance, using discrete (e.g., 5-point) (Sai et al., 2020) or continuous scales (Novikova et al., 2018).", "Recent work even automates this approach, running a human evaluation alongside automatic metrics on leaderboard submissions (Khashabi et al., 2021).", "In the RoFT system (Dugan et al., 2020), annotators attempt to detect the boundary between human- and machine-written text as a proxy for assessing quality.", "Table 3 summarizes the differences between these schemes and SCARECROW.", "See Celikyilmaz et al. (2021) for a recent survey of text generation evaluation techniques across both human and automatic metrics.", "While these approaches may sometimes be helpful at ranking systems (Card et al., 2020), they do not give us insight into exactly which parts of a generation fall short, and why.", "Table 3 compares different natural language generation human evaluations (Likert-Scale, RankME, RoFT, and SCARECROW) along the criteria GC, SET, DE, RR, EE, RS, and SA; SCARECROW covers four of these criteria, while each of the other schemes covers three."
, "One approach related to our annotation method is pursued by Wood et al. (2018), who develop a collaborative mobile app where users draw graffiti commentary on news articles.", "SCARECROW aims to assess model generations the way we would critique human-written text: by locating, coarsely categorizing, and explaining problems.", "We present SCARECROW, a method for identifying and explaining issues in generated text.", "Along with the annotation framework, we present an analysis of the SCARECROW method applied to several large neural language models in an open-ended news generation task.", "We release our data and methodology to the community.", "The authors thank members of xlab for their feedback on this work.", "This research is supported in part by NSF (IIS-1714566), DARPA MCS program through NIWC Pacific (N66001-19-2-4031), DARPA SemaFor program, and Allen Institute for AI." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "objective", "method", "method", "abstain", "other", "other" ]
[ "Prior work on Data-To-Text Generation, the task of converting knowledge graph (KG) triples into natural text, focused on domain-specific benchmark datasets.", "In this paper, however, we verbalize the entire English Wikidata KG, and discuss the unique challenges associated with a broad, open-domain, large-scale verbalization.", "We further show that verbalizing a comprehensive, encyclopedic KG like Wikidata can be used to integrate structured KGs and natural language corpora.", "In contrast to the many architectures that have been developed to integrate these two sources, our approach converts the KG into natural text, allowing it to be seamlessly integrated into existing language models.", "It carries the further advantages of improved factual accuracy and reduced toxicity in the resulting language model.", "We evaluate this approach by augmenting the retrieval corpus in a retrieval language model and showing significant improvements on the knowledge intensive tasks of open domain QA and the LAMA knowledge probe.", "Data-To-Text Generation (Kukich, 1983; Goldberg et al., 1994) involves converting knowledge graph (KG) triples of the form (subject, relation, object) into a natural language sentence(s).", "There are many standard datasets for this task such as WebNLG (Gardent et al., 2017) and many systems have been developed to improve performance on these datasets.", "However, to the best of our knowledge, no prior work has attempted to verbalize a full knowledge graph.", "Verbalizing a full KG has additional challenges over small benchmark datasets, such as entity and relation coverage and the lack of grouped sets of triples that can produce coherent sentences together.", "In this paper, we convert the English Wikidata KG (Vrandecic and Krtzsch, 2014) into natural language text (Figure 1).", "The Work done during internship at Google Spork EP 1995 The Shins English language Extended play Spork EP is the English language extended play by the band The Shins.", "generated corpus, which we call the KELM Corpus, consists of 18M sentences spanning 45M triples with 1500 distinct relations.", "For training the verbalization system, we also create an English Wikidata KGWikipedia Text aligned corpus consisting of a variety of entities such as dates and numerical quantities.", "We evaluate the quality of the generated corpus through human evaluation of a random sample.", "We further showcase the utility of this corpus in language model pre-training.", "Text represents a limited coverage of the world knowledge.", "Therefore, we expect the language models to be restricted to facts that are expressed in natural language.", "Moreover, facts may not be expressed as explicitly in text as they are in KGs, and the variability in the quality of text can eventually cause biases in the resulting models (Bolukbasi et al., 2016; Sheng et al., 2019; Manzini et al., 2019).", "Building models that handle structured data and free form text seamlessly has been a long sought-after goal.", "However, their integration is challenging due to different structural formats.", "KG verbalization provides a simple way to integrate KGs with natural text.", "We illustrate this by augmenting the REALM (Guu et al., 2020) retrieval corpus with the KELM Corpus.", "We evaluate Wikidata (KG) Wikipedia (text) Triple <-> Sentence Aligner Text to Text Generator T5 Finetuning 1 Training data Input : To Kill a Mockingbird author Harper Lee, publication date 11 July 1960.", "the augmented system on the LAMA knowledge probe (Petroni et 
al., 2019) and open domain QA and show improvements on both.", "Through ablation experiments where we augment the retrieval corpus with the raw triples instead, we further confirm the effectiveness of verbalization.", "Our contributions are as follows: (1) TEKGEN (Text from KG Generator), a data-to-text sequence-to-sequence model for verbalizing an entire KG; (2) the TEKGEN training corpus, a Text-KG aligned corpus with a wide variety of relations, including dates and quantities; (3) the KELM Corpus (Corpus for Knowledge-Enhanced Language Model Pre-training), a large-scale synthetic corpus of the Wikidata KG as natural text; and (4) data-to-text generation as a method to integrate KGs with textual pre-training corpora, showing improvements on open domain QA and the LAMA probe with the augmented model.", "Both the TEKGEN training corpus and the KELM corpus are available at https://github.com/google-research-datasets/KELM-corpus", "TEKGEN: One of the challenges in converting an entire KG to text is the wide variety of entities and relations.", "Wikidata consists of 6M entities and 1500 relations.", "In comparison, the WebNLG dataset has 600 entities and 20 relations.", "In this section, we discuss the various components of TEKGEN, also illustrated in Figure 2: 1. Create a large yet noisy training dataset using distant supervision.", "2. Sequentially fine-tune T5, first on the dataset from step 1 for improved coverage, then on a small clean dataset for reduced hallucination.", "3. Build a filter for the generated text based on its semantic quality w.r.t. the KG triples.", "We first create training data using distant supervision by aligning Wikidata triples to Wikipedia text (see Figure 3).", "For each entity, we constrain the candidate sentences to the root section of its Wikipedia page, because this section generally describes the relations of the subject entity with other entities.", "For each sentence in this section, we match all triples that have this entity as the subject.", "A triple is said to match if any alias of the object entity occurs in the sentence.", "We do not match relations to text, as there are too many ways to express them.", "Constraining to the subject entity's page and root section generally ensures that the relation is expressed in the sentence if it mentions the object entity.", "Each triple can align to multiple sentences, and each sentence can have multiple triples aligned to it.", "If any alias of the subject entity occurs in the given sentence, the sentence is selected as is; otherwise, the first animate third-person personal or possessive pronoun is replaced by the subject entity's canonical name.", "The pronoun replacement heuristic also works well because of this constraint.", "All triples aligned to a given sentence are combined together as a single example.", "Alignment statistics are shown in Table 1 and some alignment examples are shown in Table 2."
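The alias-based matching just described can be summarized in a few lines of Python. This is a minimal sketch under simplifying assumptions (exact substring matching, plain dictionaries for aliases), not the pipeline used to build the corpus; all names here are illustrative.

def align_triples(entity, triples, sentences, aliases):
    """Align KG triples to sentences from the entity's root Wikipedia section.

    triples:   list of (subject, relation, object) with subject == entity
    sentences: sentences from the root section of the entity's Wikipedia page
    aliases:   dict mapping an entity to its list of surface-form aliases
    """
    examples = []
    for sent in sentences:
        # A triple matches if any alias of its object occurs in the sentence.
        matched = [t for t in triples
                   if any(a in sent for a in aliases.get(t[2], [t[2]]))]
        if matched:
            # All triples aligned to one sentence form a single example.
            examples.append((matched, sent))
    return examples

# Toy usage with hypothetical data:
triples = [("Spork EP", "performer", "The Shins")]
aliases = {"The Shins": ["The Shins", "Shins"]}
sents = ["Spork EP is an extended play by the band The Shins."]
print(align_triples("Spork EP", triples, sents, aliases))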
, "There are a total of 45M triples, 35% of which were aligned to sentences.", "This results in 8M examples, covering 42% of the relations.", "Note that each sentence in the aligned corpus is matched to triples with a common subject entity.", "While this results in some noise, such errors should be small due to the constraint that the text is the root section of the subject entity page.", "This constraint allows us to maintain the same property of a common subject entity as the entity subgraphs used in inference (Section 3).", "It also simplified the alignment process, removing the need to match relations to text.", "In comparison, the T-REx (Elsahar et al., 2018) corpus does not have this noise due to the use of a typical NLP pipeline with coreference resolution and predicate linking.", "However, it suffers from errors due to entity linking and incorrect entailment, which are unlikely in our corpus due to this constraint.", "Table 1 (KG-Text alignment statistics): Total KG triples 45,578,261; Triples aligned 16,090,457; Total sentences aligned 7,978,814; Total KG relations 1,522; Relations aligned 663.", "Table 2 alignment example: the input triples 'Das Tagebuch der Anne Frank: (distributor, Universal Pictures), (country, Germany), (publication date, 03 March 2016)' align to the target sentence 'The film was theatrically released in the Germany on March 3, 2016, by Universal Pictures International.'", "We extract several types of triples, each of which has a slightly different matching technique.", "Other alignment corpora built using Wikipedia hyperlinks (Chen et al., 2020; Logan et al., 2019) would miss many of these triples with entities without Wikipedia pages, such as quantities, dates and certain occupations, and hence relations such as date of birth, publication year and distance from Earth.", "1. Object entity with a Wikipedia page: These are aligned by string matching all aliases.", "(e.g. Barack Obama)", "2. Object entity without a Wikipedia page: These are also aligned by matching all aliases.", "(e.g. skateboarder, professional wrestler)", "3. Object entity is a quantity: These have two components, an amount and units.", "Units are also entities in the KG and have aliases.", "We concatenate the amount with each of the unit's aliases for matching (e.g. 16 km/hr, 16 km/h, 16 kilometres per hour).", "Certain quantities do not have units (e.g. when the relation is number of episodes).", "4. Object entity is a date: Wikipedia uses only three date formats (see https://en.wikipedia.org/wiki/Wikipedia:Date_formattings).", "We first find all dates in a sentence using regular expressions and parse them into a structured format containing day of the month, month and year.", "If any of these components exist in both the dates to be matched, they should match.", "For example, if the triple date has all three components but the date extracted from a sentence has only the year, then only the year needs to match.", "5. Relations with a subproperty: For instance, the relation award received has the subproperty year, and the relation spouse may have the subproperty place of marriage.", "We retain the main triple as such and reformat the subproperty as a triple of the form (subject_entity, object_entity subproperty_name, subproperty_value)."
, "For example, (Barack, spouse, Michelle) has the subproperty (place of marriage, Trinity Church).", "These are converted to (Barack, spouse, Michelle) and (Barack, Michelle place of marriage, Trinity Church).", "We perform a two-step sequential fine-tuning of the pre-trained T5-large (Raffel et al., 2020) model for converting triples to text.", "Triples are concatenated as 'subject relation_1 object_1, ..., relation_n object_n' for input to T5.", "The model is first fine-tuned on the aligned corpus for 5000 steps to increase the coverage of entities and relations.", "However, this results in the generation of Wikipedia-like sentences and hallucination if a certain expected input triple is missing.", "For example, Wikipedia sentences generally mention date of birth, date of death, and occupation together.", "If the occupation is missing in the input, the system hallucinates a random occupation.", "For instance, the input 'Neff Maiava date of birth 01 May 1924, date of death 21 April 2018' generates 'Neff Maiava (1 May 1924 - 21 April 2018) was an Albanian actor.', hallucinating a profession.", "To overcome this, we further fine-tune the model on WebNLG 2017 data for 500 steps.", "While WebNLG has low coverage, the information in the input triples matches the target sentence exactly.", "WebNLG also has a different sentence structure than Wikipedia.", "This reduces conformity to Wikipedia sentence structure and hence reduces hallucination.", "We use a learning rate of 0.001, a batch size of 1048576 tokens and a maximum decoding length of 256.", "We perform a semantic-quality-based filtering of the sentences generated by the triple-to-text module.", "This is a separate post-processing module used during inference and is not jointly optimized with the text generation module.", "A semantic quality score is assigned to each generated sentence w.r.t. the input triples, indicating whether or not the generated text captures the full meaning of the triples and does not hallucinate extra information.", "The score is generated using a BERT base uncased model with input of the form [CLS] concatenated-triples [SEP] reference-or-generated-sentence.", "It is fine-tuned for 1000 steps on the WebNLG 2017 human assessment data.", "The data contains system predictions submitted to the shared task, rated on a scale of 1-3 for semantics and fluency.", "We use the semantics score and scale it to 0-1.", "We also add gold references with a score of 1.", "This results in 2706 examples, 90% of which are used for fine-tuning and the remaining for evaluation.", "High correlations are obtained between the predicted scores and human scores on the evaluation split (Table 3).", "In this section, we utilize the TEKGEN model and filtering mechanism to build a synthetic corpus that captures the KG in natural language format.", "Datasets such as WebNLG have instances with grouped triples that can be expressed as a fluent sentence.", "Such groups are not available for a large KG, and using one triple at a time for inference would lead to hallucination, as training uses multiple triples per example.", "Figure 4 (entity subgraph creation, as far as it can be reconstructed): for each relation r_i in the KG, build a max-heap over the pairs (r_j, c_ij) taken from the training alignment counts; then, for each entity s with triple set R = {(r, o) | (s, r, o) in KG}, repeatedly seed a new triple set with a random (r_1, o_1) from R and add the triples whose relations co-occur most often with r_1, up to a depth of 5.", "Therefore, we develop a strategy to create entity subgraphs based on relation co-occurrence counts, i.e., the frequency of alignment of two relations to the same sentence in the training data."
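A minimal Python sketch of this grouping strategy, consistent with the Figure 4 description above, might look as follows. The data structures and the greedy selection details are assumptions where the figure is not fully recoverable, so the released implementation may differ.

import random
from collections import defaultdict

def build_subgraphs(kg_triples, cooccur, depth=5):
    """Group each entity's triples into subgraphs via relation co-occurrence.

    kg_triples: list of (subject, relation, object)
    cooccur:    dict mapping (r_i, r_j) -> alignment count from training data
    """
    by_subject = defaultdict(list)
    for s, r, o in kg_triples:
        by_subject[s].append((r, o))

    subgraphs = []
    for s, pairs in by_subject.items():
        remaining = list(pairs)
        while remaining:
            # Seed a new triple set with a random remaining triple.
            r1, o1 = remaining.pop(random.randrange(len(remaining)))
            triple_set = [(s, r1, o1)]
            # Greedily add up to `depth` triples whose relations co-occur
            # most frequently with the seed relation.
            remaining.sort(key=lambda ro: cooccur.get((r1, ro[0]), 0),
                           reverse=True)
            while (remaining and len(triple_set) < depth + 1
                   and cooccur.get((r1, remaining[0][0]), 0) > 0):
                r, o = remaining.pop(0)
                triple_set.append((s, r, o))
            subgraphs.append(triple_set)
    return subgraphs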
, "The algorithm is shown in Figure 4.", "It produces 18M entity subgraphs from 45M triples, so the final corpus will have 18M generated sentences, one per entity subgraph.", "For each entity subgraph, we concatenate all its triples as before.", "We perform top-5 sampling with a temperature of 0.5.", "The bottom 1% of the generated sentences are filtered out based on the semantic score assigned using the model in Section 2.3.", "Generation quality of the KELM Corpus is evaluated using human ratings on a random sample of 200 entity subgraphs.", "Automatic metrics such as BLEU (Papineni et al., 2002) or BERTScore (Zhang et al., 2019) cannot be used due to the lack of gold references.", "Following prior work, the generated text is rated for two aspects, fluency and semantics, on a scale of 1-5, where 1 means not at all fluent/does not capture meaning at all and 5 means completely fluent/fully captures meaning with no hallucination.", "We have eight annotators in total and each example is rated by two of them.", "All annotators are linguists, NLP researchers or NLP practitioners and volunteered for the evaluation.", "We do not use any crowdsourcing platform.", "For each instance, the scores of the two annotators are averaged to get the final rating.", "The Pearson correlation between the two sets of ratings is 0.56 for semantics and 0.43 for fluency.", "We compare TEKGEN to two baseline systems.", "For both baselines, we fine-tune a T5-large model only on WebNLG 2017 data but use different inference input.", "For one system, we use one triple at a time as input, resulting in 524 instances from the 200 entity subgraphs.", "For the second, we use the entity subgraphs as input, resulting in 200 instances.", "Scores are shown in Table 4.", "Entity subgraphs during inference do not improve the mean scores but reduce the standard deviation of the fluency.", "In comparison, TEKGEN with inference using entity subgraphs improves both the semantics and fluency of the generated text.", "Both the mean scores are higher and the standard deviations are lower.", "It paraphrases canonical names of relations in the KG to more natural expressions more often.", "Some examples of generation using the two systems are shown in Table 5.", "In the second example, the relation 'inception' is paraphrased to 'started' by WebNLG_finetuning+Triple_Inference and to 'founded' by TEKGEN+Subgraph_Inference, the latter being more appropriate for organizations.", "We also evaluate two baseline systems in which the T5-large model is fine-tuned only on the KG-Text aligned corpus but with the two different inference inputs, single triples and entity subgraphs.", "One annotator rated the same sample for semantics.", "The former had an average score of 2.34 and the latter 2.73.", "Since these scores were very low, we did not pursue the evaluation of these systems further.", "The use of just the aligned corpus, which is noisy to some extent, results in the worst performing system out of all the methods.", "In this section, we showcase an application of the generated KELM Corpus as a way to integrate KGs into natural text corpora for pre-training language models (LMs), as shown in Figure 5."
, "We choose REALM (Guu et al., 2020) as a representative of the recently introduced family of retrieval language models, and we therefore expect our work to be equally applicable to other such language models.", "We show gains on the LAMA knowledge probe and open domain QA with augmentation.", "We also perform experiments where we integrate raw Wikidata triples instead of the KELM corpus to confirm the effectiveness of verbalization.", "REALM is a retrieval-based language model and uses two corpora for pre-training: a retrieval corpus and a pre-training corpus.", "During pre-training, a sentence is selected at random from the pre-training corpus and a random word or salient span (dates and entities) is masked in this sentence.", "Then, using a joint representation of the masked sentence and each of the documents in the retrieval corpus, the masked word is predicted.", "In the fine-tuning stage, the model is provided with a query/question as input in place of a masked sentence from the pre-training corpus.", "It retrieves a small set of documents from the retrieval corpus based on the vector similarity of the query and document representations, and then selects a span of text from the retrieved documents as the answer.", "We group sentences in the KELM corpus by subject entities to create 5722974 (5.7M) documents (see the sketch after the list of settings below).", "We call these KELM documents.", "We then replace/augment the retrieval corpus in REALM with these synthetic documents.", "The KELM Corpus has only 286M words (~14%) in comparison to 2B words in English Wikipedia.", "We perform evaluation using two open domain question answering datasets and one knowledge probing dataset.", "NaturalQuestions (NQ) (Kwiatkowski et al., 2019): Queries to Google and their answers.", "WebQuestions (WQ) (Berant et al., 2013): Questions issued to Google, with answers drawn from Freebase.", "We keep the same settings as REALM for both NQ and WQ, i.e. we work in the open domain setting for both datasets, where no passage is provided as context for each question.", "Fine-tuning is performed on the respective training splits.", "LAMA (Petroni et al., 2019): Fill-in-the-blank style factual queries with single-token answers from four sources: Google-RE, T-REx (Elsahar et al., 2018), SQuAD (Rajpurkar et al., 2016) and ConceptNet (Speer and Havasi, 2012).", "Google-RE also consists of aliases of each answer.", "REALM did not include LAMA as one of its evaluation datasets.", "So we first evaluate REALM on LAMA using the original retrieval corpus and then using the KELM Corpus.", "No fine-tuning is performed, and the masked word predictions from the pre-trained models are used as answers.", "We evaluate REALM on WQ, NQ and LAMA under three settings by modifying the retrieval corpus.", "1. ORIGINAL: Wikipedia text", "2. REPLACED: only the KELM Corpus", "3. AUGMENTED: Wikipedia text + KELM Corpus"
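As a concrete illustration of how the KELM retrieval documents referenced above are formed, here is a minimal, self-contained sketch of grouping generated sentences by subject entity. The function name and data layout are illustrative assumptions, not the released pipeline.

from collections import defaultdict

def make_kelm_documents(sentences_with_subjects):
    """Group generated KELM sentences by subject entity into retrieval documents.

    sentences_with_subjects: iterable of (subject_entity, sentence) pairs
    Returns one synthetic document per subject entity.
    """
    docs = defaultdict(list)
    for subject, sentence in sentences_with_subjects:
        docs[subject].append(sentence)
    return {subject: " ".join(sents) for subject, sents in docs.items()}

# Toy usage:
pairs = [("Spork EP", "Spork EP is an extended play by The Shins."),
         ("Spork EP", "Spork EP was released in 1995.")]
print(make_kelm_documents(pairs))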
, "The REPLACED and AUGMENTED models are evaluated using both the raw Wikidata triples and the generated sentences.", "Wikidata triples are grouped by subject entity to form Triple Documents, and KELM Corpus sentences are also grouped by subject entity to form KELM Corpus Documents (Section 4.2).", "The model is pre-trained for 200k steps with the CC-News pre-training corpus in all cases, with default hyperparameters.", "ORIGINAL: For NQ and WQ, we fine-tuned the pre-trained REALM on the respective training splits.", "While we were able to reproduce the accuracy on WQ, the accuracy on NQ was 1.5% absolute less than the reported accuracy (rows 1&2 in Table 7).", "For the LAMA probe, we first evaluated the pre-trained REALM, reporting the results on the different sub-corpora in Table 6 (row Wikipedia under REALM).", "Even the ORIGINAL REALM model shows substantial improvement over prior models.", "The ability of REALM to access the corpus documents during inference not only makes it interpretable but also better on knowledge intensive tasks.", "It obtains an accuracy of 67.36% on Google-RE, 68.18% on T-REx and 27.96% on SQuAD.", "Table 6 (accuracy on the LAMA probe; columns are Google-RE DOB/POB/POD/Total, T-REx 1-1/N-1/N-M/Total, SQuAD, ConceptNet): Elmo 5.5B (Peters et al., 2018): 0.10/7.50/1.30/3.00, 13.10/6.50/7.40/7.10, 4.30, 6.20; Transformer-XL (Dai et al., 2019): 0.90/1.10/2.70/1.60, 36.50/18.00/16.50/18.30, 3.90, 5.70; BERT-large (Devlin et al., 2019): 1.40/16.10/14.00/10.50, 74.50/34.20/24.30/32.30, 17.40, 19.20; REALM ORIGINAL (Wikipedia): 49.06/79.56/64.13/67.36, 55.81/69.54/66.98/68.18, 27.96, 4.78; REALM REPLACED (Triple Documents): 69.46/61.64/53.01/63.03, 49.30/62.34/53.12/58.43, 18.09, 4.27; REALM REPLACED (KELM Documents): 68.91/61.37/53.79/62.81, 49.41/61.60/52.50/57.76, 19.07, 4.26; REALM AUGMENTED (Wikipedia + Triple Documents): 71.60/80.92/69.89/76.32, 57.20/69.96/67.86/68.80, 29.93, 4.81; REALM AUGMENTED (Wikipedia + KELM Documents): 76.75/83.92/74.86/80.30, 57.84/70.33/68.09/69.13, 31.57, 5.25.", "In comparison, the reported accuracy for BERT (Devlin et al., 2019) is 10.50% on Google-RE, 32.30% on T-REx and 17.40% on SQuAD.", "BERT performs better on 1-1 T-REx relations, with 74.50% accuracy as compared to REALM with 55.81% accuracy.", "However, this group consists of only two relations; capital and capital of.", "BERT also has better performance than REALM on the ConceptNet subcorpus.", "On inspection of some of the queries in ConceptNet, we found the questions to be vague and possibly hard for even humans.", "For example, 'Raven can ___' and 'Time is ___'.", "REPLACED: The REPLACED model, which uses only KELM Corpus Documents, performs better than the ORIGINAL model on WQ, but the accuracy is much lower on NQ (rows 3&4 in Table 7).", "This can be attributed to the nature of the datasets: WQ is a KG-based dataset, whereas NQ consists of real queries issued to Google.", "On LAMA (rows 2&3 under REALM in Table 6), the performance is lower than the ORIGINAL model but much higher than BERT.", "Both Triple Documents and KELM Corpus Documents have similar performance.", "When using just the KG, the format doesn't matter.", "However, a system trained on raw triples may not generalize for tasks where sentence structure is important.", "AUGMENTED: We observe improvements on all the datasets (last two rows of Tables 6&7) with the AUGMENTED model, which uses both the Wikipedia text and the KELM Documents.", "There is an absolute gain of 2.63% and 3.10% on NQ and WQ respectively over the ORIGINAL model.", "Similarly, there is an absolute gain of 12.94%, 0.95%, 3.61% and 0.47% on
Google-RE, T-REx, SQuAD and ConceptNet in LAMA respectively.", "Unlike the REPLACED model, the improvement is higher when the generated sentences in the KELM Corpus are added instead of the raw Wikidata triples, confirming the effectiveness of verbalization of the KG into natural language sentences.", "Wikipedia is the dominant corpus with 2B words, whereas the KELM corpus sentences are succinct with a total of 286M words (Section 4.2), so it is likely the learned representations favour the Wikipedia format, which is natural language sentences.", "We expect that augmenting other retrieval-based models such as DPR (Karpukhin et al., 2020) and RAG (Lewis et al., 2020) with the KELM corpus should also improve their performance, given that their enhancements are orthogonal to our contribution.", "Moreover, we note that our augmented corpus represents a scalable strategy for future QA systems; by adding only 14% more tokens to the original REALM model we outperform huge and computationally expensive models such as Roberts et al. (2020) (11B parameters) on NQ (35.20 → 41.47) and WQ (42.80 → 43.90).", "Apart from real errors, where the prediction is actually incorrect, there were some false errors that can be broadly classified into three categories.", "1. Ambiguous Query: e.g. in 'X was born in ____', the answer could be the year or the place of birth, but only one of them is acceptable depending on the subcorpus.", "2. Incomplete Answer Set: e.g. in 'Konstantin Mereschkowski had a career as ____', the gold target is 'biologist' and the prediction is 'botanist', but both should be correct.", "3. Answer granularity: The prediction is correct but more specific.", "e.g. in 'On the CPI scale, Kenya ranks ____', the gold answer is 'low' but the prediction is '101', which is in fact correct.", "Data-to-Text Generation: Data-to-text generation has several benchmark datasets with slightly different objectives: WebNLG (Gardent et al., 2017) to convert a group of triples to text, E2ENLG (Dušek et al., 2018) to convert database key-value pairs or pictures to text, WikiBio (Lebret et al., 2016) for biography generation, Wiseman et al. (2017) for text describing score statistics tables of basketball games, and both ToTTo (Parikh et al., 2020) and DART (Radev et al., 2020) to generate text given a table and relevant highlighted cells.", "Many systems (van der Lee et al., 2018; Castro Ferreira et al., 2019; Shimorina and Gardent, 2018) have been developed and evaluated on these datasets, such as graph transformers over structured data (Koncel-Kedziorski et al., 2019), latent templates for interpretability (Wiseman et al., 2018) and text-to-text generation with T5 (Kale, 2020).", "KG-Text alignment: T-REx (Elsahar et al., 2018) is a widely used Text-KG aligned corpus, built using systems such as coreference resolution and predicate linkers (details in Section 2.1.1).", "Logan et al. (2019) and Chen et al. (2020) also created an aligned corpus using Wikipedia hyperlinks and coreference resolution (details on the comparison in Section 2.1.2).", "In contrast, we use alias-based heuristics coupled with source text selection constraints to generate a corpus of 16M triples aligned with 8M sentences.", "Lastly, open information extraction, i.e.
automatic KG construction from text (Etzioni et al., 2008; Angeli et al., 2015; Clancy et al., 2019), inherently creates such a corpus, but these works generally do not release the extracted KG triples.", "Incorporating KGs: Most prior works on incorporating KGs with text often learn KG entity representations and add them to the mention spans linked to the entity (Peters et al., 2019; Yu et al., 2020; Févry et al., 2020), or create subgraphs relevant to the query that are expanded with text in the embedding space (Logan et al., 2019; Sun et al., 2019; Xiong et al., 2019).", "Some others incorporate additional modules.", "Verga et al. (2020) extend Févry et al. (2020) by adding a triple memory with the (subject, relation) encoding as the key and the object encoding as the value.", "Das et al. (2017) use universal schema (Riedel et al., 2013), which embeds text and KGs in a shared space for their integration.", "K M et al. (2018) learn a single representation for all the triples mentioned in a sentence during pre-training and update it further in task-specific fine-tuning.", "In contrast, we convert the KG into text and use it to augment the pre-training data.", "The KELM corpus sentences cover all facts in the KG, but the generated sentences are limited to a given entity and its direct relations to other entities.", "For example, given the triples (X, child, Y) and (Y, child, Z), it does not contain 'Z is a grandchild of X'.", "More complex sentences could be generated by incorporating multi-hop relations in the KG.", "Recent work has also shown promising results on generating multilingual text from English triples (Castro Ferreira et al., 2020; Agarwal et al., 2020).", "Our proposed approach can be applied to generate a multilingual corpus of facts in various languages using English Wikidata.", "In this paper, we converted an entire KG (Wikidata) to natural text (the KELM Corpus), tackling various challenges beyond those of verbalizing domain-specific benchmark datasets.", "We further showcase that KG verbalization can be used to integrate KGs and natural text corpora by including the verbalized KG as additional pre-training data.", "We augment a retrieval-based language model with the generated synthetic KELM corpus as a retrieval corpus.", "We evaluated the augmented model on open domain QA and a knowledge probe, showing improvements on both.", "The KELM Corpus is publicly available at https://github.com/google-research-datasets/KELM-corpus.", "We thank William Woods, Jonni Kanerva, Tania Rojas-Esponda, Jianmo Ni, Aaron Cohen and Itai Rolnick for rating the synthetic corpus sample for human evaluation.", "We also thank Kelvin Guu for his valuable feedback on the paper." ]
[ "abstain", "abstain", "result", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "other", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "result", "other", "abstain", "other", "other" ]
[ "Distant supervision has been widely used in relation extraction tasks without hand-labeled datasets recently.", "However, the automatically constructed datasets comprise numbers of wrongly labeled negative instances due to the incompleteness of knowledge bases, which is neglected by current distant supervised methods resulting in seriously misleading in both training and testing processes.", "To address this issue, we propose a novel semi-distant supervision approach for relation extraction by constructing a small accurate dataset and properly leveraging numerous instances without relation labels.", "In our approach, we construct accurate instances by both knowledge base and entity descriptions determined to avoid wrong negative labeling and further utilize unlabeled instances sufficiently using generative adversarial network (GAN) framework.", "Experimental results on real-world datasets show that our approach can achieve significant improvements in distant supervised relation extraction over strong baselines.", "Relation extraction aims to identify relations for a pair of entities in a sentence to construct relation triples like [ Steve Jobs, Founder, Apple ].", "It has been well studied by supervised approaches with hand-labeled data.", "However, supervised methods are limited to costly hand-labeled training sets and Corresponding authors: Weijia Jia, Hai Zhao, { jia-wj, zhaohai } @cs.sjtu.edu.cn.", "This work is supported by National China 973 Project No. 2015CB352401; Chinese National Research Fund (NSFC) Key Project No. 61532013 and No. 61872239.", "0007/2018/A1, DCT-MoST Jointproject No. 025/2015/AMJ,FDCT,SAR Macau, China, and University of Macau Grant Nos: MYRG2018-00237-RTO, CPG2019-00004-FST and SRG2018-00111-FST, National Key Research and Development Program of China (No. 2017YFB0304100), Key Projects of National Natural Science Foundation of China (U1836222 and 61733011), and Key Project of National Society Science Foundation of China (No. 
15-ZDA041).", "To break the bottleneck of hand-labeled training sets, distant supervision (Mintz et al., 2009) automatically constructs datasets with knowledge bases.", "It assumes that if two entities have a known relation in a knowledge base, all sentences that mention these two entities will probably express the same relation and can be called positive instances.", "At the same time, it treats sentences whose entity pairs do not have a known relation in knowledge bases as negative instances.", "Due to this strong assumption, instances are likely to be mislabeled.", "To alleviate the wrong labeling problem, distant supervised methods have been implemented with multi-instance learning and neural networks (Riedel et al., 2010; Hoffmann et al., 2011; Zeng et al., 2015; Lin et al., 2016, 2017).", "However, previous works focus on positive instances, and few methods have addressed the issue of false negative instances.", "The false negative instances, which contain true relations, are misclassified sentences in the negative set due to the incomplete nature of knowledge bases.", "For example, over 70% of people included in Freebase have no known place of birth (Dong et al., 2014).", "As shown in Figure 1, S1 presents the relation place of birth, while it is labeled as a negative instance.", "The other three sentences are mislabeled in the same way.", "The missing relation triples in knowledge bases yield large numbers of false negative instances in the automatically labeled dataset.", "These instances will not only mislead the training method to an unreliable convergence but also make the measurement criteria inaccurate in the testing process.", "Table 1 compares the precision of automatic and manual evaluation methods for the top N predictions by a previous relation extractor (Lin et al., 2016) on the NYT dataset.", "From the table, we can see that manual evaluation is more precise than automatic evaluation by over 19.8%.", "The huge bias mainly comes from false negative instances in the testing set, which severely limits the upper bound of accuracy for relation extraction.", "Therefore, handling false negative instances is a pivotal issue for improving the performance of distant supervised relation extraction.", "To alleviate the effect of false negative instances, there are two possible ways.", "One is improving the accuracy of the automatically labeled dataset, and the other is properly leveraging unlabeled instances which cannot be labeled as positive or negative.", "The former is to construct an accurate dataset by filtering credible negative instances, but it is limited by high annotation cost and the resulting dataset size.", "The latter is to train relation extraction models with abundant unlabeled instances, but it is restricted by the prerequisite of an accurate dataset used as ground truth.", "Therefore, we propose a novel semi-distant supervised approach integrating both ways to decrease the influence of false negative instances for better relation extraction.", "In our approach, we additionally use entity descriptions together with a knowledge base to construct an accurate dataset.", "Supervised by the dataset as ground truth, to effectively exploit numbers of unlabeled instances, we train our relation extractor using a generative adversarial network (GAN) framework.", "In detail, we propose a three-player min-max game to generate proper relation representations for unlabeled instances in an adversarial way, which minimizes the difference between
labeled and unlabeled data and maximizes the probability of distinguishing them from each other at the same time.", "The experiments demonstrate that our approach is effective and outperforms the state-of-the-art work.", "In summary, we make the following major contributions: We propose a novel semi-distant supervision method for relation extraction to alleviate the influence of false negative instances.", "To the best of our knowledge, we are the first to generate valid relation representations for sentences by an adversarial algorithm.", "Large numbers of unlabeled instances are used to improve the performance of relation extraction.", "Moreover, our generative adversarial training strategy is proved effective on an additional sentiment classification task with sixteen real-world datasets.", "We construct a new accurate dataset for relation extraction extended from the NYT dataset.", "Our approach increases the area under the Precision-Recall curve from 0.39 to 0.56 over the baselines.", "To extend relation extraction to large-scale datasets, distant supervision (Mintz et al., 2009) automatically labeled training sets with knowledge bases such as Freebase.", "Although this method works well for large-scale relation extraction, it is trapped in the wrong labeling problem for positive instances.", "To deal with this problem, multi-instance learning was combined with distant supervision (Riedel et al., 2010; Hoffmann et al., 2011).", "Inspired by the pioneering work, a series of later studies were conducted to further improve distant supervised relation extraction with methods such as multi-instance multi-label learning (Surdeanu et al., 2012), a graph model for label generation (Takamatsu et al., 2012), partial supervision (Angeli et al., 2014), matrix completion with a low-rank criterion (Fan et al., 2014) and modeling neighbor consistency with Markov logic (Han and Sun, 2016).", "However, the performance of the methods mentioned above strongly depends on the quality of human-designed features.", "With the development of neural models, relation features with semantic meaning can be accurately, simply and automatically extracted.", "Zeng et al. (2015) proposed the first neural relation extraction with distant supervision.", "Mnih et al. (2014), Lin et al. (2016), Zhang et al. (2018), Han et al. (2018) and Du et al.
(2018) showed that attention models could improve the accuracy of neural relation extraction.", "Another similar work (Ji et al., 2017) assigned better attention weights with extra data like entity descriptions.", "DSGAN (Qin et al., 2018a), a GAN-based method, was also used to recognize true positive instances from noisy datasets.", "To further alleviate the effect of the wrong labeling problem, a soft-label training algorithm (Liu et al., 2017b), reinforcement learning methods (Feng et al., 2018; Qin et al., 2018b) and additional side information (Vashishth et al., 2018; Wang et al., 2018) have been used.", "Most recently, a few methods focused on pre-training embeddings for word tokens and relations, including adversarial training (Wu et al., 2017), transfer learning (Liu et al., 2018) and a relation decoder (Su et al., 2018).", "All the above methods mainly pay attention to positive instances.", "In contrast, few studies work on the quality of negative instances, which is exactly the focus of this paper.", "We effectively construct a reliable dataset with both entity descriptions and a knowledge base, and thus propose a novel semi-distant supervised method to extract relations precisely.", "In the distant supervised relation extraction paradigm, all sentences labeled by a relation triple constitute a bag, and each sentence is called an instance.", "The relation triple is described as [head, relation, tail], where head and tail are both entities.", "We extract relation features from labeled training bags and then predict relations for unseen bags in the test set.", "This section presents our method for constructing an accurate dataset, the sentence encoder for relation representation, and the semi-supervised approach to relation extraction.", "To reduce false negative instances, we construct a new reliable dataset extended from the widely used NYT dataset (Riedel et al., 2010) with entity descriptions.", "Entity descriptions are crawled from Wikipedia with entity name matching.", "We assume that if an entity is relevant to another entity, its name is possibly mentioned in the description of the other entity.", "For example, the entity Apple Inc. is mentioned in the description of Steve Jobs."
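The filtering heuristic this assumption motivates (developed in the following paragraphs) can be sketched as below. This is a minimal illustration under one reading of the heuristic, namely that a negative pair is kept as credible only when neither entity's description mentions the other; the helper data structures are hypothetical.

def is_credible_negative(head, tail, descriptions, aliases):
    # A pair labeled negative is treated as credible only if neither
    # entity's Wikipedia description mentions the other entity (by any
    # alias); otherwise the pair may be related and is left unlabeled.
    head_in_tail = any(a in descriptions.get(tail, "")
                       for a in aliases.get(head, [head]))
    tail_in_head = any(a in descriptions.get(head, "")
                       for a in aliases.get(tail, [tail]))
    return not (head_in_tail or tail_in_head)

# Toy usage with hypothetical descriptions:
descs = {"Steve Jobs": "Co-founder and CEO of Apple Inc.",
         "Apple Inc.": "An American technology company."}
print(is_credible_negative("Steve Jobs", "Apple Inc.", descs, {}))  # False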
, "To verify the assumption, we count the number of all the accurate positive instances whose entity descriptions mention the other entity name in the NYT corpora.", "There are 163,108 positive sentences in total, of which 161,392 contain entity pairs that are related to each other in their descriptions at least once.", "In other words, over 98.9% of the instances in the positive set fit our assumption, indicating that most entity pairs in positive instances mention each other in their descriptions.", "Therefore, a former negative instance has a big chance to be a credible negative if any of its entities is not mentioned in the description of the other one.", "Excluding instances that contain entity pairs related to each other in their descriptions, we can obtain more confident negative instances.", "Finally, we filter credible positive and negative instances from the dataset, and the other instances are unlabeled ones that cannot be labeled as positive or negative.", "We pre-train input embeddings of word tokens, including word and position embeddings.", "Word embeddings are distributed representations that map each word to a vector in R^w, where the parameter w indicates the dimension of the vector.", "The vectors are trained in advance by word2vec in the Skip-gram setting (Mikolov et al., 2013).", "In the task of relation extraction, the relative positions of input tokens are important information.", "(One entity name may refer to multiple entities which have their own pages; in our work, all the matched pages are collected together to obtain the entity's description.)", "Position embeddings are defined as the combination of the relative distances from the current word to head and tail.", "For instance, the relative distances from [co-founder] to [Steve Jobs] and [Apple] are respectively 3 and -6 in the sentence 'Steve Jobs was the co-founder and the CEO of the Apple'.", "We encode the distances as vectors in R^p, where p is the dimension.", "The position embeddings are initialized randomly and updated in the training process.", "Finally, word embeddings and position embeddings are concatenated together to feed the neural model.", "We denote all the words in an instance as an initial vector sequence b = {x_1, ..., x_i, ..., x_q}, where x_i ∈ R^(w+p) and q is the number of words in the instance b.", "A Convolutional Neural Network (CNN) is a widely used structure for the sentence encoder, as shown in Figure 2."
"With the input embeddings, the convolutional layer extracts local features with a sliding window of length k over the input tokens.", "In the figure, local features are extracted from k = 3 adjacent word tokens via dot products between the convolutional kernels and the input embeddings.", "The convolutional kernels are weight vectors represented by W \in \mathbb{R}^{d \times k(w+p)}, where d is the number of kernels.", "In summary, the convolutional operation follows the equation f_{ij} = W_i [x_{j-1}; x_j; x_{j+1}], (1) where [x; y] denotes the vertical concatenation of x and y.", "f_{ij} denotes the j-th value of the i-th filter, where i and j range over [1, d] and [1, q], respectively.", "Out-of-range input values such as x_0 and x_{q+1} are taken to be zero.", "A max-pooling operation selects the most important feature of each filter with f_i = \max_j(f_{ij}), where f \in \mathbb{R}^{d}.", "Furthermore, PCNN (Zeng et al., 2015) improves the max-pooling operation with a piecewise method, in which the outputs of the convolutional filters are divided into three segments by the head and tail entities.", "The max-pooling procedure is then performed in the three segments separately.", "Then, we summarize f into h by a non-linear function such as the hyperbolic tangent.", "The final feature vector h is fed into the output layer with a softmax, p = \mathrm{softmax}(W_r h + b_r), where W_r \in \mathbb{R}^{z \times d} and b_r \in \mathbb{R}^{z} are parameters, p \in \mathbb{R}^{z} is the estimated probability of each class, and z is the number of relations.", "The cost function for one instance is the negative log-likelihood of the relations, J_{truth}(p, y, \theta) = -\frac{1}{z} \sum_{j=1}^{z} y_j \log p_j, (2) where y \in \mathbb{R}^{z} is the one-hot ground truth and \theta denotes all the parameters.", "The architecture of our semi-distant supervision is shown in Figure 3.", "To fully utilize the reconstructed dataset, which contains accurately labeled instances and unlabeled ones, we propose a generative adversarial training strategy that transforms unlabeled instances (x_{ul}) into the labeled-data (x_l) space by generating valid relation representations (x_{gen}) and making the distribution of labeled instances p(x_l) equal to that of the generated data p(x_{gen}) in relation space (p(x_l) and p(x_{gen}) denote the data distributions of labeled and generated instances).",
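The piecewise pooling of PCNN can be sketched as follows for a single instance; the exact segment boundaries at the entity positions are an assumption about indexing, not a detail given in the text.

```python
import torch

def piecewise_max_pool(conv_out: torch.Tensor, head_pos: int, tail_pos: int) -> torch.Tensor:
    """conv_out: (d, q) outputs of d convolutional filters over q positions.
    Split the positions into three segments at the head and tail entities,
    max-pool each segment, and apply tanh; concatenating the per-segment
    maxima gives a 3d-dimensional feature when all segments are non-empty."""
    first, second = sorted((head_pos, tail_pos))
    segments = [conv_out[:, : first + 1],
                conv_out[:, first + 1 : second + 1],
                conv_out[:, second + 1 :]]
    pooled = [seg.max(dim=1).values for seg in segments if seg.size(1) > 0]
    return torch.tanh(torch.cat(pooled))
```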
"Inspired by Goodfellow et al. (2014), we further devise a three-player min-max game to generate a valid data distribution p(x_{gen}) with the sentence encoder, generative, and discriminative modules.", "The generative module minimizes the difference between p(x_l) and p(x_{gen}), while the discriminative module simultaneously maximizes the probability of distinguishing one from the other.", "The sentence encoder is proposed as the third player; it extracts relation features from all the instances and produces pre-trained relation representations for unlabeled instances.", "With the sentence encoder, we can control the relation features contained in the generated representations.", "Therefore, the discriminative module D tries to distinguish labeled data from generated data, while the generative module G makes p(x_{gen}) approach p(x_l).", "In addition, the sentence encoder S extracts relation features from all the training instances p_{all}.", "The training procedure is a three-player min-max game, as in the following equation: \min_{S,G} \max_{D} V(S, D, G) = \mathbb{E}_{x \sim p_{all}}[\log S(x)] + \mathbb{E}_{x \sim p_{x_l}}[\log D(x)] + \mathbb{E}_{c \sim p_{x_{gen}}}[\log(1 - D(G(c)))]. (3)", "In generative adversarial training, the discriminative module is trained by maximizing the gap between labeled data and generated data with the following equation: J_D(x, c, \theta_d) = \log D(x) + \log(1 - D(G(c))), (4) where x and c are instances from the accurately labeled set (x_l) and the unlabeled set (x_{ul}), respectively.", "\theta_d denotes the parameters of the discriminator.", "D(x) and D(G(c)) are defined as follows: D(x) = \sigma(W_d h_x), (5) D(G(c)) = \sigma(W_d (h_c + W_g)), (6) where W_d and W_g are the parameters of the discriminative and generative modules, respectively, and \sigma is the sigmoid function.", "The generative module is trained to make the generated relation representations more similar to real ones by the loss function J_G(c, \theta_g) = \log(1 - D(G(c))), (7) where \theta_g denotes its parameters.", "Finally, we train our sentence encoder S by optimizing the loss function J_S(x, c, \theta_s) = -\frac{1}{z} \sum_{j=1}^{z} y_j^{l} \log p(y_j | x) - \frac{1}{z} \sum_{j=1}^{z} y_j^{g} \log p(y_j | G(c)), (8) where y^{g} is a one-hot vector that labels the most probable relation for unlabeled instances, as generated by the sentence encoder, and \theta_s denotes the parameters of S.", "The complete training procedure for generative adversarial training is shown in Algorithm 1.", "The experiments are designed to answer the following three questions: 1) Is the proposed semi-distant supervision method effective for the task of relation extraction?", "2) Is the constructed dataset credible enough?", "3) Is the generative adversarial training helpful for relation extraction and other semi-supervised tasks?", "We conduct experiments on the widely used NYT dataset (Riedel et al., 2010) and its new version, Accurate-NYT (A-NYT).", "A-NYT is a credible dataset filtered by our data construction module.", "We follow previous work (Lin et al., 2016) to partition the training and testing sets for NYT and A-NYT.", "Besides, we apply sixteen real-world datasets (Liu et al., 2017a), consisting of Amazon product reviews and movie reviews, to further verify the effectiveness of our generative adversarial training strategy on the task of sentiment classification.", "The dataset details are shown in Table 2.",
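The discriminative and generative losses of Eqs. (4)-(7), with D(x) = \sigma(W_d h_x) and D(G(c)) = \sigma(W_d (h_c + W_g)), can be sketched as below; module and variable names are illustrative, and the encoder objective of Eq. (8) is left out for brevity.

```python
import torch
import torch.nn as nn

class GATModules(nn.Module):
    """Discriminator D(x) = sigmoid(W_d h_x) and generator G(c) = h_c + W_g,
    a learned shift that moves unlabeled relation features toward the
    labeled-data distribution (Eqs. (5)-(6))."""
    def __init__(self, d: int):
        super().__init__()
        self.W_d = nn.Linear(d, 1, bias=False)   # discriminative module
        self.W_g = nn.Parameter(torch.zeros(d))  # generative shift

    def discriminator_loss(self, h_labeled, h_unlabeled):
        # Eq. (4): maximize log D(x) + log(1 - D(G(c))) <=> minimize the negation
        real = torch.sigmoid(self.W_d(h_labeled))
        fake = torch.sigmoid(self.W_d(h_unlabeled + self.W_g))
        return -(torch.log(real) + torch.log(1.0 - fake)).mean()

    def generator_loss(self, h_unlabeled):
        # Eq. (7): make generated representations indistinguishable from labeled ones
        fake = torch.sigmoid(self.W_d(h_unlabeled + self.W_g))
        return torch.log(1.0 - fake).mean()
```

In an actual training loop, each loss would be minimized by an optimizer restricted to the corresponding module's parameters, alternating with the sentence-encoder objective of Eq. (8) as in Algorithm 1.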
"On NYT and A-NYT, we evaluate our method with the classical held-out evaluation.", "It evaluates our models by comparing the relation facts discovered from the test sentences with those in Freebase.", "Specifically, we report both the aggregate Precision-Recall (PR) curves and the precision at the top N predictions (P@N) in our experiments.", "For the other datasets, we compute the precision of all the predictions.", "We compare our method with the following baselines for distantly supervised relation extraction.", "Zeng et al. (2015) extracted relation features with a piecewise convolutional neural network (PCNN).", "Lin et al. (2016) integrated PCNN with a selective attention mechanism (PCNN+ATT).", "Wu et al. (2017) added adversarial noise at the level of the word embeddings (PCNN+ATT+AT).", "Liu et al. (2017b) relabeled the training instances dynamically with the relation extractor (PCNN+ATT+SL).", "Liu et al. (2018) shortened the training instances with parse trees and pre-trained word embeddings with transfer learning, which is the latest state-of-the-art work.", "Self-Training (ST) is a semi-supervised method that can be integrated with PCNN+ATT for unlabeled data; it generates relation types for unlabeled instances with the model itself.", "In our experiments, we use word2vec in the Skip-gram setting to train the word embeddings on the NYT set.", "To train our model efficiently, we iterate by randomly selecting a batch from the training set until convergence and apply the sentence-level attention mechanism following previous work (Lin et al., 2016).", "The parameters n and m are the batch sizes for the accurate and unlabeled datasets, respectively.", "We update the parameters with adaptive moment estimation (Kingma and Ba, 2015).", "Furthermore, L2 regularization and dropout (Srivastava et al., 2014) are adopted to avoid overfitting.", "Finally, we use grid search and cross-validation to determine the optimal parameters, as shown in Table 3.", "The hyper-parameters s_i, s_j, and s_k are the training steps for the different modules of generative adversarial training.", "Since the other parameters have little effect on the results, we follow the settings of previous work (Lin et al., 2016).", "The overall performance of our method compared with the baselines for distantly supervised relation extraction is shown in Table 4.", "We can see that our semi-distant supervision method achieves much better results than the baselines on all metrics.", "The huge improvement comes from both the accurate dataset and the effective training strategy, which leverages unlabeled instances properly.", "In this section, we apply two previous methods, Zeng et al. (2015) and Lin et al. (2016), to NYT and A-NYT.", "PR curves for NYT are reported in their papers, while PR curves for A-NYT come from our implementations of the two baselines.", "NYT and A-NYT share the same positive instances, while A-NYT has fewer but more credible negative instances.",
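Held-out P@N is straightforward to compute once the predicted facts are ranked by confidence; a sketch with assumed array inputs:

```python
import numpy as np

def precision_at_n(confidences, is_correct, ns=(100, 200, 300)):
    """Rank the extracted relation facts by model confidence and report the
    precision among the top-N predictions (held-out evaluation against Freebase)."""
    order = np.argsort(-np.asarray(confidences))
    ranked = np.asarray(is_correct, dtype=float)[order]
    return {n: ranked[:n].mean() for n in ns}
```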
"Table 4: Overall performance at P@N (%) and PR curve areas (columns: P@100, P@200, P@300, Mean, PR area). Zeng et al. (2015): 72.3, 69.7, 64.1, 68.7, 0.33; Lin et al. (2016): 76.2, 73.1, 67.4, 72.2, 0.35; Wu et al. (2017): 81.0, 74.5, 71.7, 75.7, 0.34; Liu et al. (2017b): 87.0, 84.5, 77.0, 82.8, 0.34; Qin et al. (2018a): 78.0, 75.5, 72.3, 75.3, 0.35; Liu et al. (2018): 87.0, 83.0, 78.0, 82.7, 0.39; Our Method: 96.0, 93.5, 93.0, 94.2, 0.56.", "(Results on NYT are reported as in the original papers except Wu et al. (2017) and Qin et al. (2018a); the results of these two methods on NYT and all results on A-NYT are obtained by our implementations.)", "As shown in Figure 4(a), methods on A-NYT always obtain better performance.", "The huge gap between the PR curves is caused by the false negative instances in NYT, which are not used for training and testing in A-NYT.", "To verify that the results on A-NYT accord with the actual situation, we perform manual evaluation at P@Ns.", "As shown in Table 5, the huge bias caused by false negative instances on NYT is dramatically alleviated on A-NYT.", "To further demonstrate the effectiveness of our training strategies, we compare Generative Adversarial Training (GAT) with other baselines on the partially labeled dataset A-NYT, as shown in Figure 4(b).", "The figure gives the following insights: 1) PCNN+ATT+ST and PCNN+ATT+AT do not work well, which is caused by the low quality of the unlabeled instances.", "2) PCNN+ATT+SL works as well as our models at low recall because of its excellent ability to extract notable features.", "Unfortunately, it falls far behind all the baselines at high recall, which means it tends to converge to a local optimum.", "3) Our model achieves solid PR curves over the whole range of recall.", "Meanwhile, we present a detailed comparison of the baselines with P@Ns and PR curve areas, as shown in Table 6.", "From the table, we can see that our training strategy achieves much better results than the other baselines, which indicates that abundant unlabeled instances are helpful for extracting relations only if used appropriately.", "Going deeper into the table, PCNN+ATT+SL works well at the top predictions but obtains the worst PR curve.", "Our semi-distant supervision model with adversarial generations is useful for leveraging unlabeled instances properly.", "To verify the extensibility of generative adversarial training, we conduct additional experiments on the task of sentiment classification.", "We implement our model and three baselines based on the Long Short-Term Memory (LSTM) network (our generative adversarial training strategy is model-independent, meaning that it could be applied to other neural models).", "The results are shown in Table 7, from which we see that: 1) Self-training obtains poor results compared with the basic LSTM model, which means it fails to utilize unlabeled data correctly.", "2) Adversarial training improves the performance on only three of the datasets and performs poorly on the others, which means it possibly relies on the quality of the unlabeled data.", "3) LSTM+GAT achieves better results than the baselines on most of the datasets because it generates high-quality representations for unlabeled sentences.", "In this paper, we propose a novel semi-distant supervision approach that is capable of jointly exploiting limited accurate instances and abundant unlabeled ones.", "We first construct a reliable dataset with a knowledge base and additional entity descriptions.", "With the dataset, a generative adversarial training strategy is proposed to deal with the plentiful unlabeled instances by generating valid relation representations.", "Our experiments show that the proposed approach achieves significant improvement over previous state-of-the-art baselines." ]
[ "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "method", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "objective" ]
[ "Adversarial robustness has attracted much attention recently, and the mainstream solution is adversarial training.", "However, the tradition of generating adversarial perturbations for each input embedding (in the settings of NLP) scales up the training computational complexity by the number of gradient steps it takes to obtain the adversarial samples.", "To address this problem, we leverage Flooding method which primarily aims at better generalization and we find promising in defending adversarial attacks.", "We further propose an effective criterion to bring hyper-parameter-dependent flooding into effect with a narrowed-down search space by measuring how the gradient steps taken within one epoch affect the loss of each batch.", "Our approach requires zero adversarial sample for training, and its time consumption is equivalent to fine-tuning, which can be 2-15 times faster than standard adversarial training.", "We experimentally show that our method improves BERT's resistance to textual adversarial attacks by a large margin, and achieves state-of-the-art robust accuracy on various text classification and GLUE tasks.", "Despite their impressive performances on various NLP tasks, deep neural networks such as BERT (Devlin et al., 2019) suffer a sharp performance degradation against deliberately constructed adversarial attacks (Zeng et al., 2021; Wang et al., 2021b; Nie et al., 2020; Zang et al., 2020; Ren et al., 2019; Zhang et al., 2019).", "A line of work attempts to alleviate this problem by creating adversarially robust models via defense methods, including adversarial data augmentation (Chen et al., 2021; Si et al., 2021), regularization (Wang et al., 2021a), and adversarial training (Wang et al., 2020; Zhu et al., 2020; Madry et al., 2018).", "Data augmentation Equal Contribution.", "and adversarial training rely on additional adversarial examples generated either by hand-crafting or conducting gradient ascent on the clean data for virtual adversarial samples.", "However, generating adversarial examples scales up the cost of training computationally, which makes vanilla adversarial training almost impractical on large-scale NLP tasks like QNLI (Question-answering NLI, Rajpurkar et al., 2016).", "An increasing amount of researchers express their concern about the time-consuming property of standard adversarial training and offer cheaper but competitive alternatives by", "(i) replacing the perturbation generation with an additional generator network (Baluja and Fischer, 2017; Xiao et al., 2018), or by", "(ii) combining the gradient computation of clean data and perturbations into one backward pass (Shafahi et al., 2019).", "These approaches still rely on additional adversarial examples generated either by the model itself or by an extra module.", "In this work, we propose a novel method, Flooding-X, to largely improve adversarial robustness without any adversarial examples, maintaining the same computational cost as conventional BERT fine-tuning.", "The vanilla Flooding (Ishida et al., 2020) method is a practical regularization technique to boost model generalization by preventing further reduction of the training loss when it reaches a reasonably small value .", "It results in a model performing normal gradient descent when training loss is above the decided value but gradient ascent when below.", "By continuing to random walk with the same non-zero value as a virtual loss, the model drifts into an area with a flat loss landscape that is claimed to lead to better generalization 
"Interestingly, we find that the Flooding method is also promising for increasing models' resistance to adversarial attacks.", "Despite the significant rise in robust accuracy, the so-called reasonably small value, which is a hyper-parameter, takes effort to find and varies for each dataset, requiring an overly extensive search among numerous candidates.", "In an attempt to narrow down the hyper-parameter candidates, we propose gradient accordance as an informative criterion for the optimal values that bring Flooding into effect, which is used as a building block in Flooding-X.", "We measure how accordant the gradients of the batches are by analyzing how the gradient descent steps based on part of an epoch affect the loss of each batch.", "Gradient accordance is computationally friendly and is tractable during the training process.", "Experiments on various tasks show a close relation between gradient accordance and overfitting.", "As a result, we propose gradient accordance as a reliable flooding criterion to make the training loss flood around the level at which the model has nearly overfitted.", "That is to say, we leverage the training loss of the model right before overfitting as the value of the flood level.", "Flooding-X is especially useful and shows a great advantage over adversarial training in terms of computational cost when the training dataset is relatively large.", "Experimental results demonstrate that our method achieves state-of-the-art robust accuracy with BERT on various tasks and improves its robust accuracy by 100 to 400% without using any adversarial examples, consuming any extra training time, or conducting an overly extensive hyper-parameter search.", "Our main contributions are as follows.", "1) We analyze and demonstrate the effectiveness of Flooding, which is designed for generalization, in improving adversarial robustness, especially in the NLP domain.", "2) We propose a promising indicator, i.e., gradient accordance, to free the Flooding method from a tedious search of the hyper-parameter.", "3) We conduct comprehensive experiments on NLP tasks to illustrate the potential of Flooding for improving BERT's adversarial robustness.", "We first describe the vanilla Flooding regularization method (Ishida et al., 2020) for alleviating overfitting by keeping the training loss from reducing to zero.", "Under the main assumption that learning until zero loss is harmful, Ishida et al. (2020) propose Flooding to intentionally prevent further reduction of the training loss when it reaches a reasonably small value, which is called the flood level.", "Figure 1: Input loss landscape of vanilla BERT and different adversarial training algorithms under Gaussian random noise of standard deviation \sigma on the SST-2 dataset.",
"Intuitively, this approach makes the training loss float around the pre-defined flood level and alternate from normal mini-batch gradient descent to gradient ascent whenever the loss is below the flood level.", "With the constraint of the flood level, the model continues to random walk around the non-zero training loss, which is expected to reach a flat loss landscape.", "The flooded learning objective is \tilde{J}(\theta) = |J(\theta) - b| + b, (1) where J denotes the original learning objective and \tilde{J} represents the modified learning objective with flooding.", "The positive value b is the flood level specified by the user, and \theta is the model parameter.", "Accordingly, the flooded empirical risk is then defined as \tilde{R}(f) = |\hat{R}(f) - b| + b, (2) within which \hat{R}(f) and \tilde{R}(f) denote the original and flooded empirical risk, respectively, and f refers to the score function to be learned by the model.", "During back-propagation, the gradients of \hat{R}(f) and \tilde{R}(f) w.r.t. the model parameters point in the same direction when the empirical risk is above b but in opposite directions when it is below b.", "As a result, the model performs normal gradient descent when the learning objective is above the flood level, and gradient ascent when below.", "Flooding is designed for overfitting, but why is it valid for adversarial robustness?", "According to the definition described in the previous section, Flooding does not make any difference to the training process when the loss is above the flood level.", "When the training loss approaches the flood level, on closer inspection, gradient descent and gradient ascent begin to alternate.",
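The flooded objective of Eqs. (1)-(2) amounts to a one-line wrapper around any training loss; a minimal sketch, where b is the user-specified flood level:

```python
def flooded(loss, b: float):
    """Flooding (Ishida et al., 2020): gradient descent when loss > b and
    gradient ascent when loss < b, so the loss floats around the flood level."""
    return (loss - b).abs() + b

# usage inside a standard fine-tuning step (names are illustrative):
#   loss = flooded(criterion(model(batch), labels), b=0.1)
#   loss.backward()
```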
"Assume that the model with learning rate \epsilon performs gradient descent for the n-th batch and then gradient ascent for batch n+1, which results in: \theta_n = \theta_{n-1} - \epsilon g(\theta_{n-1}), \quad \theta_{n+1} = \theta_n + \epsilon g(\theta_n). (3)", "In the equations above, g(\theta) = \nabla J(\theta) is the gradient of J(\theta) w.r.t. the model parameters.", "We can then get \theta_{n+1} = \theta_{n-1} - \epsilon g(\theta_{n-1}) + \epsilon g(\theta_{n-1} - \epsilon g(\theta_{n-1})), (4) which is, by Taylor expansion, approximately equivalent to \theta_{n+1} = \theta_{n-1} - \frac{\epsilon^2}{2} \nabla \|g(\theta_{n-1})\|^2. (5)", "Thus, theoretically, when the training loss is relatively low, the model shifts into a new learning mode in which the learning rate is \epsilon^2/2 and the objective is to minimize \|g(\theta)\|^2.", "Generally, the flooded model is guided into an area with a smooth parameter landscape, which leads to better adversarial robustness (Stutz et al., 2021; Prabhu et al., 2019; Yu et al., 2018; Li et al., 2018a).",
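The equivalence of Eqs. (4)-(5) can be checked on the toy objective J(\theta) = \theta^2, where it holds exactly: a descent step followed by an ascent step lands at the same point as one step on \|g(\theta)\|^2 with learning rate \epsilon^2/2.

```python
eps, theta = 0.1, 1.0
g = lambda t: 2 * t                                   # gradient of J(t) = t^2
after_descent = theta - eps * g(theta)                # loss above b: descent step
after_pair = after_descent + eps * g(after_descent)   # loss below b: ascent step
# one step on ||g(t)||^2 = 4 t^2 (whose gradient is 8 t) with lr eps^2 / 2
surrogate = theta - (eps ** 2 / 2) * 8 * theta
print(after_pair, surrogate)                          # both print 0.96
```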
"As demonstrated in Figure 1, adversarial training brings about a smoother loss change when the input embedding is perturbed by Gaussian random noise, which is closely related to stronger adversarial robustness.", "Among all the training methods involved in Figure 1, Flooding-X leads to the smoothest loss landscape, indicating an overall more robust model against attacks.", "Despite its potential for boosting the model's resistance to adversarial attacks, the optimal flood level has to be found by exhaustive search within a wide range at tiny steps, which is not easily at hand.", "A relatively large flood level lengthens the gradient steps and keeps the model from converging, while a tiny value makes hardly any difference to the training process.", "The effect of Flooding relies heavily on the flood level and is, at the same time, sensitive to subtle changes of this hyper-parameter.", "Figure 2: Influence of different flood levels on the performance of the trained BERT on SST-2 against the attack of TextFooler (Jin et al., 2020).", "Figure 2 reveals that even a slight change in the value of the flood level can make a huge difference in the adversarial robustness of the so-trained model.", "In an attempt to ease the effort of searching and make the best of Flooding, we propose a promising and reliable criterion to narrow down the search space, which is described in detail in the next section.", "Since Flooding is proposed as an attempt to avoid overfitting, we intuitively suppose that the optimal flood level would be found at the stage when the model is about to overfit.", "That is, we leverage the training loss before overfitting as the flood level.", "Inspired by the influence function (Koh and Liang, 2017), we propose gradient accordance as a criterion for flooding, which is empirically proved to be reliable and indicative.", "We consider the effect of the model updated w.r.t. one epoch on each of its batches as a signal of overfitting.", "As indicated by its name, this criterion measures the relation among the gradients of the batches at the epoch level, evaluating whether the model updated on an epoch has the same positive effect on the batches on average.", "Now we provide the formal definition of gradient accordance.", "We denote a model as a functional approximation f parameterized by \theta.", "Consider a training data point x with the ground-truth label y, which results in a loss L(f(\theta, x), y).", "The gradient of the loss w.r.t. the parameters is thus g = \nabla_\theta L(f(\theta, x), y), (6) whose negation denotes the direction in which the parameters are updated to better correspond to the desired outputs on the training data (Fort et al., 2019).", "Now let us consider two data points x_1 and x_2 with their corresponding labels y_1 and y_2.", "According to the definition above, the gradient of sample 1 is g_1 = \nabla_\theta L(f(\theta, x_1), y_1).", "We inspect how a small change of \theta in the direction -g_1 influences the loss on sample x_1 or x_2: \Delta L_1 = L(f(\theta - \epsilon g_1, x_1), y_1) - L(f(\theta, x_1), y_1), (7) where f(\theta, x_1) can be expanded by Taylor expansion to be f(\theta, x_1) = f(\theta - \epsilon g_1, x_1) + \epsilon g_1 \nabla_\theta f + O(\epsilon^2). (8)", "Here, we refer to (\epsilon g_1 \nabla_\theta f + O(\epsilon^2)) as T(x_1); by repeating a similar expansion we can get L(f(\theta, x_1), y_1) = L(f(\theta - \epsilon g_1, x_1) + T(x_1), y_1) = L(f(\theta - \epsilon g_1, x_1), y_1) + \nabla_f L \cdot T(x_1) + O(T^2(x_1)). (9)", "Equation (7) is thus equal to \Delta L_1 = -\nabla_f L \cdot T(x_1) - O(T^2(x_1)) = -\nabla_f L \cdot (\epsilon g_1 \nabla_\theta f + O(\epsilon^2)) = -\epsilon g_1 \cdot g_1 - O(\epsilon^2). (10)", "Similarly, the change of the loss on x_2 caused by the gradient update on x_1 is \Delta L_2 = -\epsilon g_1 \cdot g_2 - O(\epsilon^2).", "Notably, \Delta L_1 is negative by definition, since the model is updated with respect to x_1 and this naturally decreases its loss.", "The model updated on x_1 is considered to have a positive effect on x_2 if \Delta L_2 is also negative, and an opposite effect if it is positive.", "The equations above demonstrate that this correlation is equivalent to the overlap between the gradients of the two data points, g_1 \cdot g_2, which we hereafter refer to as gradient accordance.", "Data-point-level gradient accordance is too fine-grained to be tractable in practice.", "Thus, we scale it up to coarse-grained gradient accordance at the batch level, which is computationally tractable and still reliable as a criterion for overfitting.", "Consider a training batch B_0 with n samples X = \{x_1, x_2, \dots, x_n\} and labels y = \{y_1, y_2, \dots, y_n\} of k classes \{c_1, c_2, \dots, c_k\}.", "These samples can be divided into k groups according to their labels, X = X^1 \cup X^2 \cup \dots \cup X^k, and so are the labels, y = \bigcup_{i=1}^{k} y^i, where all the samples in X^m belong to class c_m.", "Thus, we have the sub-batch B_0^1 = \{X^1, y^1\}.", "We then define the class accordance score of two sub-batches B_0^1 and B_0^2 of classes c_1 and c_2 as: C(B_0^1, B_0^2) = \mathbb{E}[\cos(g^1, g^2)], (11) where g^1 is the gradient of the training loss of sub-batch B_0^1 w.r.t. the model parameters, and \cos(g^1, g^2) = (g^1 / |g^1|) \cdot (g^2 / |g^2|).",
"Class accordance measures whether the gradient step taken with respect to a sub-batch B_0^1 of class c_1 will also decrease the loss for samples in another sub-batch B_0^2 of class c_2 (Fort et al., 2019; Fu et al., 2020).", "Further consider that there are N batches in one training epoch and the training samples are of k classes.", "The batch accordance score between batches B_s and B_t is defined as S_{batch-accd}(B_s, B_t) = \frac{1}{k(k-1)} \sum_{j=1}^{k} \sum_{i=1, i \neq j}^{k} C(B_s^i, B_t^j). (12)", "Batch accordance quantifies the learning consistency of two batches by evaluating how the model updated on one batch affects the other.", "To be more specific, a positive batch accordance denotes that the two measured batches are at the same learning pace, since the model updated according to each batch benefits them both.", "The gradient accordance of a certain epoch (or a part of an epoch, namely a sub-epoch, which can be several batch iterations) is finally defined as S_{epoch-accd} = \frac{1}{N(N-1)} \sum_{s=1}^{N-1} \sum_{t=s+1}^{N} S_{batch-accd}(B_s, B_t). (13)", "Gradient accordance scales the batch accordance score up from a measure of two batches to that of a sub-epoch.", "Criterion for Flooding: a positive gradient accordance means that the model performing gradient descent w.r.t. a certain epoch decreases the loss of its batches on average, indicating that the learning paces of most batches are in line with each other.", "A negative one means that the model has overfitted to some of the training batches, since the update of one epoch increases the loss of its batches on average, which is exactly the stage we would like to identify for the model with gradient accordance.", "We assume that the optimal flood level lies in the range of the training loss of a model when it is about to overfit.", "In the following section, we empirically prove that gradient accordance is a reliable and promising criterion for flooding.",
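Class accordance (Eq. (11)) is a cosine similarity between per-class sub-batch gradients, and Eqs. (12)-(13) average it over class pairs and batch pairs. A simplified sketch is given below, where each batch is represented by a dict mapping a class label to the flattened loss gradient of its sub-batch; the names and the averaging over unordered batch pairs are illustrative choices.

```python
import itertools
import torch

def cosine(g1: torch.Tensor, g2: torch.Tensor) -> float:
    return torch.dot(g1 / g1.norm(), g2 / g2.norm()).item()

def batch_accordance(grads_s: dict, grads_t: dict) -> float:
    """Eq. (12): average class accordance C over ordered class pairs i != j."""
    classes = list(grads_s)
    pairs = [(i, j) for i, j in itertools.product(classes, classes) if i != j]
    return sum(cosine(grads_s[i], grads_t[j]) for i, j in pairs) / len(pairs)

def epoch_accordance(per_batch_grads: list) -> float:
    """Eq. (13): average batch accordance over the batch pairs of a (sub-)epoch."""
    pairs = list(itertools.combinations(range(len(per_batch_grads)), 2))
    return sum(batch_accordance(per_batch_grads[s], per_batch_grads[t])
               for s, t in pairs) / len(pairs)
```

A sign change of epoch_accordance would then flag the stage at which the current training loss is recorded as a candidate flood level.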
"In this section, we provide a comprehensive analysis of Flooding-X through extensive experiments on five text classification datasets of various tasks and scales: SST (Socher et al., 2013), MRPC (Dolan and Brockett, 2005), QNLI (Rajpurkar et al., 2016), IMDB (Maas et al., 2011), and AG News (Zhang et al., 2015).", "The statistics of these datasets are illustrated in Table 1, including the volume of the training / test sets and the average word count of the training samples.", "We conduct experiments on BERT-base (Devlin et al., 2019) and compare the robust accuracy of Flooding-X with other adversarial training algorithms to demonstrate its strength.", "We implement all models in MindSpore.", "We compare Flooding-X with three adversarial training algorithms and one regularization method, as well as the vanilla Flooding.", "Projected gradient descent (PGD, Madry et al., 2018) formulates adversarial training as solving a minimax problem that minimizes the empirical loss on adversarial examples that can lead to maximized adversarial risk.", "By adding adversarial perturbations to word embeddings, FreeLB generates virtual adversarial samples inside the region around the input samples.", "Token-Aware Virtual Adversarial Training (TAVAT, Li and Qiu, 2021) aims at fine-grained perturbations, leveraging a token-level accumulated perturbation vocabulary to better initialize the perturbations and constraining them within a token-level normalization ball.", "InfoBERT (Wang et al., 2021a) leverages two mutual-information-based regularizers for robust model training, suppressing noisy mutual information while increasing the mutual information between local stable features and global features.", "Flooding (Ishida et al., 2020) has been introduced in detail in Section 2.1.", "We implemented Flooding and searched for the flood level at a step of 0.01, as is the convention.", "The best result for each dataset is reported.", "Three well-received attack methods are leveraged via TextAttack (Morris et al., 2020) for an extensive comparison between our proposed method and the baseline algorithms.", "TextFooler (Jin et al., 2020) identifies the words that are important to the target model and repeatedly replaces them with synonyms until the prediction of the model is altered.", "Similarly, TextBugger (Li et al., 2018b) also searches for important words and modifies them by choosing an optimal perturbation from several generated kinds of perturbations.", "BERTAttack (Li et al., 2020) applies BERT in a semantic-preserving way to generate substitutes for the vulnerable words detected in the given input.", "We consider four evaluation metrics to measure BERT's resistance to the mentioned adversarial attacks under different defense algorithms.", "Aua%: accuracy under attack measures the model's prediction accuracy on the adversarial data deliberately generated by a certain attack method.", "A higher Aua% means a more robust model and a better defender.", "Suc%: the attack success rate is evaluated as the ratio of the number of texts successfully perturbed by a specific attack method to the number of all the involved texts.", "Robust models are expected to score low on Suc%.", "#Query: the number of queries denotes the average number of attempts the attacker makes to query the target model.", "The larger the number is, the harder the model is to attack.",
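These metrics can be computed directly from per-example attack logs; a sketch with assumed record fields:

```python
def robustness_metrics(records):
    """records: per-example attack results with boolean fields
    'correct_under_attack' and 'attack_succeeded' and an integer 'n_queries'.
    The field names are assumptions for illustration."""
    n = len(records)
    return {
        "Aua%": 100.0 * sum(r["correct_under_attack"] for r in records) / n,
        "Suc%": 100.0 * sum(r["attack_succeeded"] for r in records) / n,
        "#Query": sum(r["n_queries"] for r in records) / n,
    }
```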
"All the baseline methods are re-implemented based on their open-released code, and the results are competitive with those reported.", "We train our models on NVIDIA RTX 3090 and RTX 2080Ti GPUs, depending on the volume of the dataset involved.", "Most of the parameters, such as the learning rate and warm-up steps, are in line with vanilla BERT (Devlin et al., 2019) and the baseline methods.", "For all of the adversarial methods we set the number of training steps to 5 for a fair comparison, which is a trade-off between training cost and model performance.", "The clean accuracy (Clean%) is tested on the whole test dataset.", "The other three metrics (i.e., Aua%, Suc%, and #Query) are evaluated on the whole test dataset for SST-2 and MRPC, and on 800 randomly chosen samples for IMDB, AG News, and QNLI.", "We train 10 epochs for each model on each dataset, among which the last epochs are selected for the comparison of adversarial robustness.", "The extensive results of all the above-mentioned methods are summarized in Table 2.", "Generally, our Flooding-X method improves BERT by a large margin in terms of its resistance to adversarial attacks, surpassing the baseline adversarial training algorithms on most datasets under different attack methods.", "Under the TextFooler attack (Jin et al., 2020), our algorithm reaches the best robust performance on four datasets: IMDB, AG News, SST-2, and MRPC.", "We observe that Flooding is more effective on smaller datasets than larger ones, since smaller datasets with shorter training sentences are easier for the neural network to memorize and are more likely to cause overfitting.", "On the QNLI dataset, where Flooding-X fails to win, the accuracy under attack is only 0.2 points lower than that of the 5-step PGD.", "This might be explained by the mild change in gradient accordance during training on the QNLI dataset, in which case the precise stage of overfitting is hard to identify.", "Though we believe that a better value of the flood level exists and could further boost the performance, we refuse to fall back on extensive hyper-parameter searching, which is against the original purpose of Flooding-X.", "Notably, our method performs better than the baseline adversarial training methods by 5 to 20 points on average, even without using any adversarial examples as a training source, not to mention the vanilla BERT.", "In most cases, our method remains the best-performing algorithm against BERTAttack (Li et al., 2020) and TextBugger (Li et al., 2018b).", "This proves that our method maintains its effectiveness under different kinds of adversarial attacks.", "As a byproduct, the clean accuracy of our method is also competitive among the baseline methods, which is inherited from the vanilla Flooding that aims at better generalization.", "In this section, we construct supplementary experiments to further analyze the effectiveness of Flooding-X and its building block, i.e., gradient accordance.", "The influence function (Koh and Liang, 2017) inspects the influence of one single training data point on the model prediction, and stiffness (Fort et al., 2019) measures how the model updated according to one sample affects the model prediction on another.", "Based on these two works, gradient accordance is proposed as a means of identifying model overfitting at the sub-epoch level.", "As seen in Figure 3, during the training process, the turning point of gradient accordance from negative to positive closely matches the point when the test loss is about to increase, which is well received as a signal of overfitting.", "Since it is computationally intractable to calculate gradient accordance after training on every single batch, we can only figure out the range where the model is about to overfit by computing gradient accordance at the sub-epoch level.", "Despite its outstanding performance at the last training epoch, we find that Flooding-X boosts the robustness of the model at an earlier stage than standard fine-tuning and adversarial training methods like FreeLB.", "As shown in Figure 4, Flooding-X improves BERT's adversarial robustness to a relatively high level at epoch 5, which is competitive with that of standard fine-tuning at the last epoch.", "Besides, Flooding-X accelerates the increase of robustness at the late training stage.", "Starting from epoch 7, our method enables a steep increase in the accuracy under attack, which is due to the effect of Flooding that forces the model to perform a fiercer random walk as the training losses of most batches fall below the flood level.", "It is also demonstrated that the training loss stops approaching zero under the constraint of Flooding-X, while standard fine-tuning and adversarial training continue to decrease the training loss towards zero, which brings about the risk of overfitting.", "Figure 4: Loss and Aua% (accuracy under attack) of BERT trained on SST-2 under different methods.", "To further reveal the strength of Flooding besides its robustness performance, we compare its GPU training time consumption with the baseline methods on several datasets of different sizes.", "For a fair comparison, every model of each dataset is trained on a single NVIDIA RTX 2080Ti GPU with the same batch size; models on SST-2 are trained with a batch size of 32, while QNLI and IMDB are trained with 8 and 4, respectively, since their training sentences are much longer than those of SST-2.",
"As demonstrated in Table 3, the time consumption (seconds) of Flooding is competitive with standard fine-tuning, which is far less than that of the adversarial training algorithms.", "Adversarial training (AT) is a well-received method for defending against adversarial attacks.", "As an attempt against adversarial attacks, AT generates gradient-based adversarial samples and leverages them for further training (Goodfellow et al., 2015).", "A line of work tries different means for the generation of adversarial examples.", "The PGD algorithm (Madry et al., 2018), compared as a baseline method in our experiments, involves multiple projected gradient ascent steps to find the adversarial perturbations, which are then used for updating the model parameters.", "However, it is computationally expensive and has prompted many attempts to cut down the cost.", "Shafahi et al. (2019) and Zhu et al. (2020) focus on finding better adversarial samples while maintaining a low cost.", "Apart from gradient-based methods, which generate adversarial perturbations on the continuous input embeddings, some works tailor AT to the NLP field.", "The adversarial examples are generated by replacing parts of the original texts based on certain rules such as semantic similarity (Alzantot et al., 2018; Jin et al., 2020; Li et al., 2020).", "Ebrahimi et al. (2018) propose a perturbation strategy that conducts character insertion, deletion, and replacement.", "The mentioned AT algorithms generate additional adversarial examples either by calculating gradients or by human effort, which is computationally expensive and labor-intensive.", "Deep neural networks are shown to suffer from overfitting to training configurations and to memorise training scenarios (Takeoka et al., 2021; Rodriguez et al., 2021; Roelofs et al., 2019; Werpachowski et al., 2019), which leads to poor generalization and vulnerability to adversarial perturbations.", "One way of identifying overfitting is to see whether the generalization gap, i.e., the test loss minus the training loss, is increasing or not (Salakhutdinov, 2014).", "Ishida et al. (2020) further decompose the situation of an increasing generalization gap into two stages with regard to the change of both training and test losses (Zhang et al., 2021; Belkin et al., 2018; Arpit et al., 2017).", "Derived from the influence function (Koh and Liang, 2017), Fort et al. (2019) propose the concept of Stiffness as a new perspective on generalization.",
"They measure how stiff a network is by looking at how a small gradient step in the network parameters on one example affects the loss on another example.", "However, from a practical perspective, it is computationally intractable to compute the stiffness between every pair of samples during standard training, where thousands of samples are involved in one batch.", "In this work, we propose Flooding-X as an efficient and computation-friendly algorithm for improving BERT's resistance to adversarial attacks.", "We first theoretically show that the vanilla Flooding method is able to boost the model's adversarial robustness by leading it into a smooth parameter landscape.", "We further propose a promising and computationally tractable criterion, Gradient Accordance, to detect when the model is about to overfit and accordingly narrow down the hyper-parameter space for Flooding with an optimal flood level guaranteed.", "Experimental results prove that gradient accordance is closely related to the phenomenon of overfitting; equipped with it, Flooding-X beats the well-received adversarial training methods and achieves state-of-the-art performance on various NLP tasks against different textual attack methods.", "This implies that adversarial examples, whether generated by gradient-based algorithms or by human effort, are not a must for the improvement of adversarial robustness.", "We call for further exploration and deeper understanding of the nature of adversarial robustness and attacks.", "The authors wish to thank the anonymous reviewers for their helpful comments.", "This work was partially funded by National Natural Science Foundation of China (No. 62076069, 61976056).", "This research was sponsored by Hikvision Cooperation Fund, Beijing Academy of Artificial Intelligence (BAAI), and CAAI-Huawei MindSpore Open Fund." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Despite the achievements of large-scale multimodal pre-training approaches, cross-modal retrieval, e.g., image-text retrieval, remains a challenging task.", "To bridge the semantic gap between the two modalities, previous studies mainly focus on word-region alignment at the object level, lacking the matching between the linguistic relation among the words and the visual relation among the regions.", "The neglect of such relation consistency impairs the contextualized representation of image-text pairs and hinders the model performance and the interpretability.", "In this paper, we first propose a novel metric, I ntra-modal S elf-attention D istance ( ISD ), to quantify the relation consistency by measuring the semantic distance between linguistic and visual relations.", "In response, we present I nter-modal A lignment on I ntra-modal S elf-attentions ( IAIS ), a regularized training method to optimize the ISD and calibrate intra-modal self-attentions from the two modalities mutually via inter-modal alignment.", "The IAIS regularizer boosts the performance of prevailing models on Flickr30k and MS COCO datasets by a considerable margin, which demonstrates the superiority of our approach.", "1 1 Introduction Cross-modal retrieval, including image-text retrieval, video-text retrieval, etc., has long been an important downstream task in cross-modal representation learning.", "Image-Text Retrieval (ITR) aims at modeling the similarity of image-text pairs and recalling the most relevant one.", "It remains quite challenging due to the heterogeneity of the data and the semantic gap between two different modalities.", "To bridge this gap, neural networks are responsible for learning global representations of images Corresponding Author 1 Our code is available at https://github.com/ lancopku/IAIS A guy with a red shirt walking.", "and texts in a joint semantic space and aligning the images and texts with the same semantics (Faghri et al., 2018; Kiros et al., 2014).", "A straightforward way to enhance the alignment is to enforce the local matching between the object-oriented words and the corresponding image regions, and then leverage the object co-occurrence statistics (Liu et al., 2020; Zhang et al., 2020a) in the pairs for inference.", "Previous studies incorporate auxiliary knowledge source like scene graphs (Yu et al., 2020) or object tags (Li et al., 2020) to explicitly indicate the cross-modal mapping.", "Other researches try to establish fine-grained interaction on cross-modal attention to reinforce the focus from words to their most relevant regions, and vice versa (Chen et al., 2020; Wang et al., 2019; Messina et al., 2020; Lee et al., 2018; Zhang et al., 2020b; Yang et al., 2020).", "However, such word-region alignment at object level serves only as the basis because it mainly focuses on the local semantics but lacks the matching of global features like the intra-modal relation .", "The intra-modal relation refers to the correlation of items within a textual or visual sequence.", "More specifically, given a sentence and an image that describe the same scene and are highly matched, the correlation of the items in the textual sequence should also agree with the correlation of the corresponding items in the visual sequence.", "But such constraint of relation consistency is neglected in previous works, which hinders performance and interpretability of the models.", "To corroborate this, we conduct a case study on Flickr30k Entities dataset (Plummer et al., 2015) to probe the agreement of 
"We utilize the self-attention distribution as a representation of the intra-modal relations (Clark et al., 2019; Htut et al., 2019; Kovaleva et al., 2019).", "As shown in Figure 1, the attention distributions grouped by annotated object for the given text and image are in disagreement with each other.", "Specifically, the attention distribution in the linguistic modality is reasonable.", "However, in the visual modality, the region a red shirt pays inappropriate attention to the region of the dog, which does not appear in the text; this impairs the representation of the visual item a red shirt under the condition of the corresponding text.", "Such mismatched attention distributions suggest that the model represents the same concept with inconsistent semantics, which misleads the model into reducing the estimated similarity of positive pairs and further leads to the wrong prediction that they are unmatched.", "What is even worse is that, in practice, the input regions of existing methods are extracted by a pre-trained object detector like Faster R-CNN (Ren et al., 2015).", "The visual features are much noisier due to over-sampling (Li et al., 2020; Anderson et al., 2018), which necessitates a stronger regularizer to guide the alignment of the intra-modal relations.", "Motivated by the above observations, we promote the semantic alignment from the object level to the relation level.", "We leverage the self-attention matrix to characterize the relation of items within one modality, and design Intra-modal Self-attention Distance (ISD), a novel metric to measure the consistency between textual and visual relations.", "Our empirical analysis illustrates that the ISD and the model performance on image-text retrieval are highly correlated, which verifies our hypothesis and inspires us to minimize the semantic distance between intra-modal self-attentions during training.", "Accordingly, we propose a new regularized training method called Inter-modal Alignment on Intra-modal Self-attentions (IAIS) to calibrate the two intra-modal attention distributions mutually via inter-modal alignment, which helps learn better contextualized representations for image-text pairs.", "The model performance of image-text retrieval on the Flickr30k and MS COCO datasets is improved by a considerable margin with IAIS, which demonstrates the superiority of our proposal.", "In this section, we present a formal definition of intra-modal relation alignment (Section 2.1).", "Such alignment requires extracting the visual and linguistic items corresponding to all objects and sorting them in the same order to make their self-attention distributions comparable.", "We first introduce the mechanism for multimodal attention calculation, and then present the method of attention weight extraction for constructing comparable intra-modal self-attentions (Section 2.2).", "Finally, we propose a metric named Intra-modal Self-attention Distance (ISD) to quantify the relation consistency.", "We conduct an empirical analysis on prevailing models to verify the correlation between the model performance and our metric (Section 2.3).", "Given a sequence O = [o_1, \dots, o_N] of N objects appearing in an image-text pair, the linguistic and visual representations of such an object sequence can be written as L = [l_1, \dots, l_N] and V = [v_1, \dots, v_N], respectively.", "Each pair of items l_i, v_i with the same index refers to the same object o_i.", "(An object o_i may require one or more tokens in the text and one or more regions in the image to describe, such that the linguistic item l_i and the visual item v_i may refer to a collection of tokens and regions, respectively.)",
"For every object, its relation to the others is depicted in both the linguistic and the visual modality.", "From a linguistic view, we regard the following textual self-attention distribution as the relation R_{l_i} stemming from l_i: R_{l_i} = [a_{l_i l_1}, \dots, a_{l_i l_i}, \dots, a_{l_i l_N}], (1) where a_{l_i l_j} is the attention weight from l_i to l_j.", "Similarly, the relation R_{v_i} from the view of the visual modality can be written as R_{v_i} = [a_{v_i v_1}, \dots, a_{v_i v_i}, \dots, a_{v_i v_N}]. (2)", "Consequently, we can achieve relation-level alignment by narrowing the semantic distance, e.g., the Kullback-Leibler Divergence, between the linguistic and visual self-attention distributions for all objects from i = 1 to N: \min \sum_{i=1}^{N} \mathrm{KL}(R_{l_i} \| R_{v_i}). (3)", "In the original self-attention matrix, however, the attention weights of specific objects are scattered and disordered.", "We need to extract the target weights and reorder them to construct the comparable attention distributions R_{l_i} and R_{v_i}.", "In this subsection, we first introduce the vanilla multimodal attention mechanism and then present a specific way of attention weight extraction.", "Consider models of single-stream Transformer-based architecture like UNITER (Chen et al., 2020).", "The model consists of a stack of Transformer layers with the attention mechanism (Vaswani et al., 2017) and is responsible for encoding image-text pairs into feature representations.", "Given Q, K, V \in \mathbb{R}^{N \times d}, the matrices of N query, key, and value vectors with dimension d, respectively, the attention function \mathrm{Att}(Q, K, V) is defined as: \mathrm{Att}(Q, K, V) = \sigma(QK^{\top}) V = \sigma(S) V, (4) where \sigma is a row-wise, scaled softmax and S is a matrix of attention scores that measure the similarity between every pair of query and key vectors.", "Let \mathcal{L} and \mathcal{V} denote the linguistic and the visual modality, respectively.", "Given a textual sequence X_L of N_L tokens and a visual sequence X_V of N_V regions, the input X = [X_L \| X_V] in the single-stream architecture is a concatenation of the two sequences with length N = N_L + N_V.", "Accordingly, the query and key matrices can be written as: Q = XW_Q = \begin{pmatrix} X_L \\ X_V \end{pmatrix} W_Q = \begin{pmatrix} Q_L \\ Q_V \end{pmatrix}, \quad K = XW_K = \begin{pmatrix} X_L \\ X_V \end{pmatrix} W_K = \begin{pmatrix} K_L \\ K_V \end{pmatrix}, (5) where W_Q and W_K are learnable parameters (the value matrix V is omitted for brevity).", "Figure 2: An example of calculating the Intra-modal Self-attention Distance (ISDa) for a matched image-text pair, Two surfers enjoying the waves.", "The two inputs in the pair both contain the objects two surfers and the waves.", "For the self-attention matrices S_{LL} and S_{VV} from each modality, we extract object-oriented patches according to the annotations and summarize them with the Cps operation (Eq. (7)) to synthesize the new matrices S_{LL}^{(a)} and S_{VV}^{(a)}.", "Finally, we use our ISDa metric to measure their semantic distance.", "Furthermore, the attention score matrix S \in \mathbb{R}^{N \times N} can be organized into four submatrices (Bugliarello et al., 2020): S = QK^{\top} = \begin{pmatrix} Q_L \\ Q_V \end{pmatrix} \begin{pmatrix} K_L^{\top} & K_V^{\top} \end{pmatrix} = \begin{pmatrix} Q_L K_L^{\top} & Q_L K_V^{\top} \\ Q_V K_L^{\top} & Q_V K_V^{\top} \end{pmatrix} = \begin{pmatrix} S_{LL} & S_{LV} \\ S_{VL} & S_{VV} \end{pmatrix}. (6)", "The matrices S_{LL} and S_{VV} on the diagonal represent the linguistic and the visual intra-modal self-attention, respectively.", "S_{LV} and S_{VL} on the anti-diagonal represent the inter-modal attention scores from text to image and from image to text, respectively.", "We regard the self-attentions S_{LL} and S_{VV} as depictions of the intra-modal relations.",
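Recovering the four blocks of Eq. (6) from the full attention score matrix is a simple slicing operation; a sketch assuming the N_L text tokens precede the N_V regions in the concatenated input:

```python
import torch

def split_attention_scores(S: torch.Tensor, n_text: int):
    """S: (N, N) attention scores over the concatenated sequence [X_L || X_V].
    Returns the intra-modal blocks S_LL, S_VV and the inter-modal S_LV, S_VL."""
    S_LL, S_LV = S[:n_text, :n_text], S[:n_text, n_text:]
    S_VL, S_VV = S[n_text:, :n_text], S[n_text:, n_text:]
    return S_LL, S_VV, S_LV, S_VL
```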
"Each row of such a matrix represents the relation stemming from one linguistic or visual item to the others within the same modality.", "To construct the comparable intra-modal self-attention matrices, we leverage the object annotations in the Flickr30k Entities dataset (Plummer et al., 2015) to extract the tokens, regions, and attention weights with respect to the target objects.", "As shown in Figure 2, the text and the image both contain the annotated objects two surfers and the waves.", "The linguistic object sequence can be written as L = [l_1, l_2] = [two surfers, the waves].", "These two objects derive four intrinsic relations, which can be described by four patches in the original linguistic self-attention matrix S_{LL}.", "For clarity, we define an operation \mathrm{Ext}(S, o_i, o_j) that extracts the patch of attention scores in matrix S from object o_i to o_j.", "Accordingly, the relation from two surfers to the waves can be denoted as \mathrm{Ext}(S_{LL}, l_1, l_2).", "To describe a relation with a single value instead of a sub-matrix, we further construct an operation \mathrm{Cps}(\cdot) that summarizes an attention patch S \in \mathbb{R}^{M \times N} into a scalar via column-wise sum and row-wise average: \mathrm{Cps}(S) = \left( \sum_{i}^{M} \sum_{j}^{N} S_{ij} \right) / M. (7)", "After the above processing, we complete the extraction of the linguistic self-attention S_{LL} by grouping the items by annotated object.", "The extraction of the visual self-attention S_{VV} is similar, and the final results are denoted as S_{LL}^{(a)} and S_{VV}^{(a)}.", "As our processing of the two intra-modal self-attentions follows the same order of object annotations, the matrices S_{LL}^{(a)} and S_{VV}^{(a)} from the two modalities are of the same dimension and comparable.", "Given the two comparable matrices S_{LL}^{(a)} and S_{VV}^{(a)}, we propose a metric called Intra-modal Self-attention Distance with annotation (ISDa) to quantify their semantic gap at the relation level.", "We define the following symmetric matrix-based Kullback-Leibler Divergence (m-KL) for measuring the distance between two matrices A and B: \mathrm{m\text{-}KL}(A, B) = \sum_{i}^{N} \mathrm{KL}(A_i \| B_i) + \mathrm{KL}(B_i \| A_i), (8) where (\cdot)_i stands for the i-th row vector of the matrix and KL denotes the Kullback-Leibler Divergence.", "Accordingly, the final ISDa metric for S_{LL}^{(a)} and S_{VV}^{(a)} is defined as: \mathrm{ISDa} = \mathrm{m\text{-}KL}(S_{LL}^{(a)}, S_{VV}^{(a)}). (9)", "We present our algorithm for the calculation of ISDa in Algorithm 1.",
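Eqs. (8)-(9) translate directly into code; a sketch that assumes the two annotation-grouped matrices are first re-normalized row-wise (here with a softmax, which is an assumption) so that each row is a probability distribution:

```python
import torch
import torch.nn.functional as F

def m_kl(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Symmetric row-wise KL divergence between two (N, N) matrices whose rows
    are probability distributions (Eq. (8))."""
    def kl(P, Q):
        return (P * (P.clamp_min(1e-12).log() - Q.clamp_min(1e-12).log())).sum()
    return kl(A, B) + kl(B, A)

def isda(S_LL_a: torch.Tensor, S_VV_a: torch.Tensor) -> torch.Tensor:
    """ISDa of Eq. (9) on the annotation-grouped self-attention matrices."""
    return m_kl(F.softmax(S_LL_a, dim=-1), F.softmax(S_VV_a, dim=-1))
```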
Algorithm 1: Intra-modal Self-attention Distance with Annotation (ISDa) Input: Intra-modal self-attention matrices SLL , SVV Input: Linguistic object sequence L Input: Visual object sequence V for linguistic object l i in L do for linguistic object l j in L do S l i l j Ext ( SLL , l i , l j ) S ( a ) LL [ i, j ] Cps (cid:0) S l i l j (cid:1) for visual object v i in V do for visual object v j in V do S v i v j Ext ( SVV , v i , v j ) S ( a ) VV [ i, j ] Cps (cid:0) S v i v j (cid:1) ISDa = m-KL (cid:16) S ( a ) LL , S ( a ) VV (cid:17) // Eq.9 return ISDa 0.188 0.198 0.208 0.218 0.228 0.238 0.248 0.258 0.268 0.278 375 385 395 405 415 425 435 0 500 1000 1500 2000 ISD a M e t a -S u m o f R e c a ll Training steps Meta-Sum of Recall ISDa Figure 3: The ISDa (blue ) and model performance (Meta-Sum of Recall, orange ) with respect to the training steps.", "To study the correlation between the ISDa metric and the model performance, 4 we conduct an empirical analysis on UNITER (Chen et al., 2020).", "As shown in Figure 3, the ISDa decreases during the training phase while the model performance continues to increase.", "They are strongly correlated with a Pearson's correlation coefficient of -0.60.", "After the middle stage of training, the curve of the model performance and ISDa tends to be flat, suggesting that merely optimizing the task-oriented loss function while neglecting the constraint of relation consistency hinders the model from achieving better performance.", "To eliminate the bottleneck, we can minimize the ISD in the training phase as a regularization to induce further improvement for the ITR task and better the model interpretability.", "4 We use the Meta-Sum (Chen et al., 2020), sum of Re-call@1, Recall@5, Recall@10 across the image and text retrieval as a metric for model performance.", "In this section, we propose a new regularized training method, Inter-modal Alignment on Intra-modal Self-attentions (IAIS), for image-text retrieval.", "Our goal is to enhance the semantic alignment of relations by minimizing the distance between two intra-modal self-attentions (ISD).", "In practice, given the original visual and linguistic input sequence V = [ v 1 , , v NV ] , L = [ l 1 , , l NL ] with the scattered items, 5 there are no object annotations and the region features extracted by Faster R-CNN are much noisier (Li et al., 2020; Anderson et al., 2018), which results in difficulty in grouping the attention weights by ground-truth object.", "The ISDa thus cannot be used directly as the objective function to minimize.", "To tackle this problem, we regard the input sequence from one modality (e.g., the visual sequence V ) as an anchor.", "For every item in the anchor sequence, we extract its corresponding representation from the other modality (e.g., one item or a collection of items in the linguistic sequence L ) to reconstruct a mirrored sequence.", "After that, the items and their relations within the anchor sequence have a one-to-one correspondence with the items and relations within the mirrored sequence, which makes the intra-modal self-attentions derived from the two sequences comparable.", "In the next two subsections, we propose two methods, singular alignment and distributed alignment , to accomplish the attention extraction and reconstruction.", "linguistic and visual attention weight, while the latter establishes a distributed mapping.", "Besides, we design two losses L ( s ) IAIS and L ( d ) IAIS as a surrogate of the ISDa to measure the semantic distance between intra-modal 
self-attention matrices.", "Finally, we incorporate the surrogate loss minimization as a regularization to calibrate the intra-modal self-attentions mutually and achieve relation-level alignment.", "For every item in the anchor sequence, singular alignment utilizes the inter-modal attention to find its most relevant item from the opposite modality.", "As the inter-modal attention score quantifies the similarity between items from the two modalities, the visual and the linguistic item with the highest score can be aligned with each other.", "For example, given the $i$-th visual item $v_i$ and the inter-modal attention matrix $S_{VL}$, the similarities between $v_i$ and all the linguistic items are depicted in $S_{VL}[i, :]$, i.e., the $i$-th row of the matrix.", "Hence the most relevant linguistic item for $v_i$ can be denoted as $l_{i^*}$, where $i^* = \arg\max S_{VL}[i, :]$.", "Accordingly, for every weight $a_{v_i v_j}$ in the original visual self-attention matrix $S_{VV}$, its corresponding weight $a_{l_{i^*} l_{j^*}}$ in the linguistic self-attention matrix $S_{LL}$ can be extracted by the following operation: $a_{l_{i^*} l_{j^*}} = \mathrm{Ext}(S_{LL}, l_{i^*}, l_{j^*})$, $i^* = \arg\max S_{VL}[i, :]$, $j^* = \arg\max S_{VL}[j, :]$ (10), as a singular alignment.", "After all the extractions, we reconstruct a mirrored matrix $S^{(s)}_{VV}$ such that $S^{(s)}_{VV}[i, j] = a_{l_{i^*} l_{j^*}}$ (compared with Section 2.2, the Ext operation here extracts a single attention weight instead of a patch), which can be regarded as a representation of", "$S_{VV}$ from the linguistic view.", "The surrogate loss of ISDa between $S_{VV}$ and $S^{(s)}_{VV}$ is denoted as $\mathcal{L}^{(s)}_{\text{IAIS-V}}$ when taking vision as the anchor modality.", "Similar processing can also be performed when the linguistic sequence is the anchor.", "We can generate the matrix $S^{(s)}_{LL}$ as a visual representation of the linguistic self-attention $S_{LL}$ and define a corresponding loss $\mathcal{L}^{(s)}_{\text{IAIS-L}}$.", "The detailed processing of singular alignment is illustrated in Algorithm 2 and Figure 4.", "The singular version of the IAIS loss is defined as: $\mathcal{L}^{(s)}_{\text{IAIS}} = \mathcal{L}^{(s)}_{\text{IAIS-V}} + \mathcal{L}^{(s)}_{\text{IAIS-L}} = \text{m-KL}\big(\sigma(S_{VV}), \sigma(S^{(s)}_{VV})\big) + \text{m-KL}\big(\sigma(S_{LL}), \sigma(S^{(s)}_{LL})\big)$ (11).", "3.2 Distributed Alignment: As singular items from different modalities may not be able to give a full representation of each other, we further propose distributed alignment, which utilizes a collection of linguistic items as a representation of a visual item, and vice versa.", "Specifically, given two visual items $v_i$ and $v_j$, we regard the inter-modal attentions $\sigma(S_{VL}[i, :])$ (the $i$-th row of $S_{VL}$) from $v_i$ to all linguistic items and $\sigma(S_{LV}[:, j])$ (the $j$-th column of $S_{LV}$) from all linguistic items to $v_j$ as features.", "Hence the original similarity $S_{VV}[i, j] = a_{v_i v_j}$ between $v_i$ and $v_j$ can also be modeled as a dot-product of their distributed attention features from the cross-modal view: $\sigma(S_{VL}[i, :]) \cdot \sigma(S_{LV}[:, j])$.", "Such distributed", "alignment leverages the language as a bridge to draw implicit connections within the visual modality, which can be intuitively regarded as back-translation (Sennrich et al., 2016) for the multimodal setting.", "As shown in Figure 4, the distributed version of the mirrored self-attention matrix can be constructed by a matrix multiplication of two inter-modal attention matrices: $S^{(d)}_{VV} = \sigma(S_{VL})\,\sigma(S_{LV})$, $S^{(d)}_{LL} = \sigma(S_{LV})\,\sigma(S_{VL})$ (12).", "Similar to the singular-alignment version, the distributed IAIS loss can be written as: $\mathcal{L}^{(d)}_{\text{IAIS}} = \mathcal{L}^{(d)}_{\text{IAIS-V}} + \mathcal{L}^{(d)}_{\text{IAIS-L}} = \text{m-KL}\big(\sigma(S_{VV}), S^{(d)}_{VV}\big) + \text{m-KL}\big(\sigma(S_{LL}), S^{(d)}_{LL}\big)$ (13); a code sketch of both alignments follows this list.", "3.3 Relation Alignment as Regularizer: With the IAIS loss, the surrogate of the semantic distance between two intra-modal self-attentions, we present a new regularized training method to enhance the relation alignment for image-text retrieval.", "Our final loss is two-fold.", "The first is the task-oriented margin loss: $\mathcal{L}_{\text{margin}} = \sum_{i=1}^{N_p} \sum_{j=1}^{N_n} \big[ S_j - S_i + \alpha \big]_+$ (14), where $[x]_+ = \max(0, x)$ and $\alpha$ is a preset margin.", "$N_p$ and $N_n$ denote the numbers of positive and negative pairs.", "$S_i$ and $S_j$ are the similarity scores of a positive and a negative image-text pair, respectively.", "The second is the IAIS loss over all positive pairs, which quantifies their relation distance.", "The IAIS loss is computed based on the attentions from the last Transformer layer, and it can be either the singular alignment version (Eq.", "(11)) or the distributed alignment version (Eq.", "(13)).", "To summarize, our final loss can be formalized as: $\mathcal{L} = \mathcal{L}_{\text{margin}} + \lambda_t \mathcal{L}_{\text{IAIS}}$ (15), where $\lambda_t$ is a hyper-parameter with respect to the training step $t$ that balances the two loss terms.", "Since our relation-level alignment is based on mappings between linguistic and visual items, it is beneficial to focus on item-level alignment in the earlier training stage via the task-oriented loss.", "Accordingly, we utilize Training Signal Annealing (Xie et al., 2020) to gradually incorporate the signal of the IAIS loss and design the following exponential schedule: $\lambda_t = \exp((t/T - 1) \cdot 5)$ (16); a sketch of this schedule also follows this list.", "Here $T$ is the total number of training steps during the fine-tuning phase and $t$ is the current step.", "As a pluggable regularizer, our IAIS method does not introduce any extra parameters or additional data collection, yet it efficiently empowers the models to capture the higher-level semantics of relation consistency.", "We conduct experiments on the Flickr30k (Young et al., 2014) and MS COCO (Lin et al., 2014) datasets.", "Flickr30k contains 31K images collected from the Flickr website, with five textual descriptions per image.", "We follow Karpathy and Li (2015) to split the data into 30K/1K/1K training/validation/test splits.", "MS COCO consists of 123K images, each accompanied by five human-written captions.", "Following Karpathy and Li (2015), the data is divided into 82K/5K/5K training/validation/test images.", "Due to the limitation of computing resources, we only incorporate the IAIS regularization in the fine-tuning phase instead of pre-training.", "We use the base (12 layers) and the large (24 layers) versions of UNITER (Chen et al., 2020), one of the most prevailing large-scale pre-trained models, as our baseline and backbone for IAIS.", "We follow the fine-tuning setting and hyper-parameter configuration of the original paper (https://github.com/ChenRocks/UNITER).", "The margin $\alpha$ in Eq.", "(14) is 0.2.", "For each positive instance, 31 hard negative instances are sampled on the text and the image side, respectively, and as each batch contains 8 different positive instances, the batch size is 512.", "The learning rate is 5e-5 and the number of training steps is 5000 for both the base and large models.", "All experiments are run on 8 NVIDIA V100 GPUs.", "The main results of the UNITER performance with and without our IAIS regularization are reported in Table 1.", "
Our methods, in both the singular and the distributed version, surpass the baseline by a considerable margin.", "The average improvement over all datasets and models is 4.49.", "There are also some interesting findings: (1) Compared with image retrieval, the model performance on text retrieval is boosted by IAIS more remarkably, with an average improvement of 3.50.", "Note that each image in both datasets is paired with five ground-truth sentences, and our IAIS regularizer helps the model capture the common relations between the image and the corresponding texts, so that more ground-truth texts can be successfully retrieved.", "(2) The improvement on UNITER-base is 17.2% higher than that on UNITER-large.", "A consistent result can be found in Table 2, which reports various relation distance metrics of the fine-tuned models.", "The ISDa of UNITER-large is smaller than that of UNITER-base, indicating that UNITER-large learns more about relation consistency due to its larger capacity, while there is still room to improve the relation alignment with our IAIS method.", "(3) The relative improvement brought by the singular version of IAIS is 7.0%, higher than that of the distributed version.", "The ISDa and $\mathcal{L}^{(s)}_{\text{IAIS}}$ are correlated with a Pearson's correlation (Table 2: Different relation distance metrics of each model after fine-tuning. UNITER-base: ISDa 0.26, $\mathcal{L}^{(s)}_{\text{IAIS}}$ 0.59, $\mathcal{L}^{(d)}_{\text{IAIS}}$ 0.36; + IAIS-singular: 0.18, 1.31e-3, 2.58e-3; + IAIS-distributed: 0.17, 2.80e-3, 2.72e-3; UNITER-large: 0.23, 0.40, 0.16; + IAIS-singular: 0.18, 2.27e-3, 3.22e-3; + IAIS-distributed: 0.18, 3.15e-3, 3.70e-3.)", "coefficient of 0.779, which is also higher compared to $\mathcal{L}^{(d)}_{\text{IAIS}}$ with 0.774.", "Besides, our empirical analysis in Figure 5 shows that it is slightly easier to optimize $\mathcal{L}^{(s)}_{\text{IAIS}}$, indicating that it is a better surrogate of ISDa.", "In Section 3.3, we leverage both the linguistic and the visual input as the anchor sequence to reconstruct the mirrored sequence from the opposite modality.", "To study the impact of the anchor modality, we conduct an ablation study; the results are listed in Table 3.", "Compared to using language as the anchor modality, i.e., when only $\mathcal{L}_{\text{IAIS-L}}$ is incorporated, the overall model performance is 2.1 higher when vision is taken as the anchor.", "An explanation is that the descriptive capability of visual regions is more concrete and powerful.", "However, introducing both $\mathcal{L}_{\text{IAIS-V}} + \mathcal{L}_{\text{IAIS-L}}$ into the final loss achieves a further improvement of 2.22, which indicates the necessity of such a combination.", "Besides the exp schedule in Eq.", "(16) for training signal annealing, we also try other schedules: a log schedule, $\lambda_t = 1 - \exp(-t/T)$; a linear schedule, $\lambda_t = t/T$; and exp schedules, $\lambda_t = \exp((t/T - 1) \cdot \gamma)$, where $\gamma$ is chosen from $\{5, 10\}$.", "All the schedules are shown in Figure 6.", "We compare the results of five schedules for IAIS signal annealing.", "The results in Figure 8 show that the exp schedule with scale $\gamma = 5$ achieves the best performance.", "We also apply IAIS on different layers of UNITER-base.", "As illustrated in Figure 9, the optimal choice is to apply IAIS on the last layer.", "We speculate that it is more important to learn relation alignment in the deeper layers because the attention in the deeper layers has a bigger impact on the final output, while the effect of the attention in shallow layers might fade away due to normalization.", "We further discuss the advantage of our proposed relation-level alignment.", "Figure 7 shows two visualization examples of the intra-modal self-attentions from the Flickr30k Entities dataset.", "With IAIS regularization, the model is instructed to concentrate on the common relations within the linguistic and visual sequences, yielding more calibrated and consistent self-attention distributions.", "In this section, we introduce the task of image-text retrieval and review the representative studies of this area.", "Image-Text Retrieval: Image-Text Retrieval (ITR; Barnard et al., 2003; Barnard and Forsyth, 2001), also known as Image-Text Matching, is one of the popular and challenging Language-and-Vision (V+L) tasks.", "Given image-text pairs, the prevailing approaches project them into a joint representation space, on which cosine or dot-product similarities are defined, and recall the most relevant one according to the similarity.", "Multimodal Pre-trained Models: The development of the transformer-based large-scale pretraining paradigm sweeps across the area of multimodal learning and achieves many state-of-the-art results on V+L tasks like Image Captioning, Visual Question Answering, Visual Commonsense Reasoning, etc.", "Recent prevailing multimodal pre-trained models can be categorized into single-stream (Chen et al., 2020; Gan et al., 2020; Lin et al., 2020; Li et al., 2020; Su et al., 2020; Lin et al., 2021) and two-stream (Yu et al., 2020; Tan and Bansal, 2019; Lu et al., 2019) models.", "Figure 9: Comparison of the different layers to apply IAIS on the Flickr30k dataset. Given a piece of text and an image, the former architecture concatenates the features of tokens and regions and learns their joint representations with one transformer model, while the latter embeds the textual and the visual input separately with two independent intra-modal transformers and then utilizes an", "inter-modal transformer to reinforce cross-modal interactions via cross-modal attention modules.", "In this paper, we promote the semantic alignment for cross-modal retrieval from the object level to the relation level.", "We propose a surrogate metric to quantify the relation consistency by measuring the semantic distance between linguistic and visual relations.", "Furthermore, we present a regularized training method, IAIS, to calibrate the intra-modal self-attentions mutually by minimizing the ISD metric.", "Our method improves both the performance and the interpretability of large-scale pre-trained models.", "Note that, without object annotations in practice, the singular and distributed versions of the IAIS loss only provide a coarse-grained attention distribution alignment.", "We leave the elaborate design of an ISDa proxy function for future work.", "This work is partly supported by the Beijing Academy of Artificial Intelligence (BAAI).", "We thank all the anonymous reviewers for their constructive comments, and Xuancheng Ren and Lei Li for their helpful suggestions in preparing the manuscript." ]
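The ISDa computation above (Eqs. (7)-(9) and Algorithm 1) is straightforward to prototype. The following is a minimal NumPy sketch, not the authors' released code: it assumes the two self-attention matrices are given as dense arrays, that object annotations are supplied as lists of token/region index groups in the same object order for both modalities, and that each object-level row is renormalized into a distribution before the KL terms (a detail the text does not spell out).

```python
import numpy as np

def cps(patch):
    # Eq. (7): sum all entries of the patch, then divide by its row count M.
    return patch.sum() / patch.shape[0]

def object_level_attention(S, groups):
    # Ext + Cps: collapse a token/region-level matrix to an object-level one.
    n = len(groups)
    A = np.empty((n, n))
    for i, gi in enumerate(groups):
        for j, gj in enumerate(groups):
            A[i, j] = cps(S[np.ix_(gi, gj)])          # Ext(S, o_i, o_j)
    # Assumed renormalization so every row is a valid distribution.
    return A / A.sum(axis=1, keepdims=True)

def m_kl(A, B, eps=1e-12):
    # Eq. (8): symmetric matrix-based KL divergence, summed over rows.
    A, B = A + eps, B + eps
    return float((A * np.log(A / B)).sum() + (B * np.log(B / A)).sum())

def isda(S_ll, S_vv, lang_groups, vis_groups):
    # Eq. (9) / Algorithm 1.
    return m_kl(object_level_attention(S_ll, lang_groups),
                object_level_attention(S_vv, vis_groups))
```

For the Figure 2 example, lang_groups would hold the token indices of two surfers and the waves, and vis_groups the region indices of the same two objects, in the same order.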
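The singular and distributed alignments (Eqs. (10)-(13)) can likewise be sketched on top of the four attention blocks of Eq. (6). This is an illustrative PyTorch fragment under stated assumptions — raw score blocks as inputs, row-wise softmax as sigma, and an m-KL analogous to Eq. (8) — not the released implementation.

```python
import torch
import torch.nn.functional as F

def m_kl(P, Q, eps=1e-12):
    # Symmetric row-wise KL between two row-stochastic matrices (cf. Eq. 8).
    P, Q = P.clamp_min(eps), Q.clamp_min(eps)
    return (P * (P / Q).log()).sum() + (Q * (Q / P).log()).sum()

def singular_mirror(S_anchor2other, S_other_self):
    # Eq. (10): map each anchor item i to its argmax partner i* in the
    # other modality, then read off the partners' self-attention weights.
    idx = S_anchor2other.argmax(dim=-1)        # i -> i*
    return S_other_self[idx][:, idx]           # S^(s)[i, j] = a_{i*, j*}

def iais_singular(S_vv, S_ll, S_vl, S_lv):
    # Eq. (11); sigma (row-wise softmax) is applied before the m-KL.
    S_vv_s = singular_mirror(S_vl, S_ll)
    S_ll_s = singular_mirror(S_lv, S_vv)
    return (m_kl(F.softmax(S_vv, -1), F.softmax(S_vv_s, -1)) +
            m_kl(F.softmax(S_ll, -1), F.softmax(S_ll_s, -1)))

def iais_distributed(S_vv, S_ll, S_vl, S_lv):
    # Eqs. (12)-(13): mirrored matrices as products of softmaxed
    # inter-modal attention blocks; language acts as the bridge.
    S_vv_d = F.softmax(S_vl, -1) @ F.softmax(S_lv, -1)
    S_ll_d = F.softmax(S_lv, -1) @ F.softmax(S_vl, -1)
    return (m_kl(F.softmax(S_vv, -1), S_vv_d) +
            m_kl(F.softmax(S_ll, -1), S_ll_d))
```

Note how the distributed version needs no argmax: multiplying the two softmaxed inter-modal blocks routes every intra-modal relation through all items of the other modality at once.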
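Finally, the annealing schedule of Eq. (16) and the combined objective of Eq. (15) are one-liners; gamma = 5 is the scale the ablation found best.

```python
import math

def annealing_weight(step, total_steps, gamma=5.0):
    # Eq. (16): lambda_t grows from exp(-gamma) toward 1 as step -> total_steps.
    return math.exp((step / total_steps - 1.0) * gamma)

# Eq. (15): loss = margin_loss + annealing_weight(step, T) * iais_loss
```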
[ "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "objective", "objective", "method", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "other", "other" ]
[ "In this work, we focus on the task of generating natural language descriptions from a structured table of facts containing fields (such as nationality, occupation, etc ) and values (such as Indian, { actor, director } , etc ).", "One simple choice is to treat the table as a sequence of fields and values and then use a standard seq2seq model for this task.", "However, such a model is too generic and does not exploit task-specific characteristics.", "For example, while generating descriptions from a table, a human would attend to information at two levels:", "(i) the fields (macro level) and", "(ii) the values within the field (micro level).", "Further, a human would continue attending to a field for a few timesteps till all the information from that field has been rendered and then never return back to this field (because there is nothing left to say about it).", "To capture this behavior we use", "(i) a fused bifocal attention mechanism which exploits and combines this micro and macro level information and", "(ii) a gated orthogonalization mechanism which tries to ensure that a field is remembered for a few time steps and then forgotten.", "We experiment with a recently released dataset which contains fact tables about people and their corresponding one line biographical descriptions in English.", "In addition, we also introduce two similar datasets for French and German.", "Our experiments show that the proposed model gives 21 % relative improvement over a recently proposed state of the art method and 10 % relative improvement over basic seq2seq models.", "The code and the datasets developed as a part of this work are publicly available.", "1 * The first three authors have contributed equally to this work.", "Rendering natural language descriptions from structured data is required in a wide variety of commercial applications such as generating descriptions of products, hotels, furniture, etc", "., from a corresponding table of facts about the entity.", "Such a table typically contains { field, value } pairs where the field is a property of the entity ( e.g. , color ) and the value is a set of possible assignments to this property ( e.g. , color = red ).", "Another example of this is the recently introduced task of generating one line biography descriptions from a given Wikipedia infobox (Lebret et al., 2016).", "The Wikipedia infobox serves as a table of facts about a person and the first sentence from the corresponding article serves as a one line description of the person.", "Figure 1 illustrates an example input infobox which contains fields such as Born, Residence, Nationality, Fields, Institutions and Alma Mater.", "Each field further contains some words ( e.g. , particle physics, many-body theory, etc .).", "The corresponding description is coherent with the information contained in the infobox.", "Note that the number of fields in the infobox and the ordering of the fields within the infobox varies from person to person.", "Given the large size (700K examples) and heterogeneous nature of the dataset which contains biographies of people from different backgrounds (sports, politics, arts, etc. 
), it is hard to come up with simple rule-based templates for generating natural language descriptions from infoboxes, thereby making a case for data-driven models.", "Based on the recent success of data-driven neural models for various other NLG tasks (Bahdanau et al., 2014; Rush et al., 2015; Yao et al., 2015; Chopra et al., 2016; Nema et al., 2017), one simple choice is to treat the infobox as 1539 Figure 1 : Sample Infobox with description : V. Balakrishnan (born 1943 as Venkataraman Balakrishnan) is an Indian theoretical physicist who has worked in a number of fields of areas, including particle physics, many-body theory, the mechanical behavior of solids, dynamical systems, stochastic processes, and quantum dynamics.", "a sequence of { field, value } pairs and use a standard seq2seq model for this task.", "However, such a model is too generic and does not exploit the specific characteristics of this task as explained below.", "First, note that while generating such descriptions from structured data, a human keeps track of information at two levels.", "Specifically, at a macro level, she would first decide which field to mention next and then at a micro level decide which of the values in the field needs to be mentioned next.", "For example, she first decides that at the current step, the field occupation needs attention and then decides which is the next appropriate occupation to attend to from the set of occupations ( actor, director, producer, etc. ).", "To enable this, we use a bifocal attention mechanism which computes an attention over fields at a macro level and over values at a micro level.", "We then fuse these attention weights such that the attention weight for a field also influences the attention over the values within it.", "Finally, we feed a fused context vector to the decoder which contains both field level and word level information.", "Note that such two-level attention mechanisms (Nallapati et al., 2016; Yang et al., 2016; Serban et al., 2016) have been used in the context of unstructured data (as opposed to structured data in our case), where at a macro level one needs to pay attention to sentences and at a micro level to words in the sentences.", "Next, we observe that while rendering the output, once the model pays attention to a field (say, occupation) it needs to stay on this field for a few timesteps (till all the occupations are produced in the output).", "We refer to this as the stay on behavior.", "Further, we note that once the tokens of a field are referred to, they are usually not referred to later.", "For example, once all the occupations have been listed in the output we will never visit the occupation field again because there is nothing left to say about it.", "We refer to this as the never look back behavior.", "To model the stay on behaviour, we introduce a forget (or remember) gate which acts as a signal to decide when to forget the current field (or equivalently to decide till when to remember the current field).", "To model the never look back behaviour we introduce a gated orthogonalization mechanism which ensures that once a field is forgotten, subsequent field context vectors fed to the decoder are orthogonal to (or different from) the previous field context vectors.", "We experiment with the WIKIBIO dataset (Le-bret et al., 2016) which contains around 700K { infobox, description } pairs and has a vocabulary of around 400K words.", "We show that the proposed model gives a relative improvement of 21 % and 20 % as compared to current state of 
the art models (Lebret et al., 2016; Mei et al., 2016) on this dataset.", "The proposed model also gives a relative improvement of 10% as compared to the basic seq2seq model.", "Further, we introduce new datasets for French and German along the same lines as the English WIKIBIO dataset.", "Even on these two datasets, our model outperforms the state-of-the-art methods mentioned above.", "Natural Language Generation has always been of interest to the research community and has received a lot of attention in the past.", "The approaches for NLG range from", "(i) rule-based approaches (e.g., (Dale et al., 2003; Reiter et al., 2005; Green, 2006; Galanis and Androutsopoulos, 2007; Turner et al., 2010))", "(ii) modular statistical approaches which divide the process into three phases (planning, selection and surface realization) and use data-driven approaches for one or more of these phases (Barzilay and Lapata, 2005; Belz, 2008; Angeli et al., 2010; Kim and Mooney, 2010; Konstas and Lapata, 2013)", "(iii) hybrid approaches which rely on a combination of handcrafted rules and corpus statistics (Langkilde and Knight, 1998; Soricut and Marcu, 2006; Mairesse and Walker, 2011) and", "(iv) the more recent neural network based models (Bahdanau et al., 2014).", "Neural models for NLG have been proposed in the context of various tasks such as machine translation (Bahdanau et al., 2014), document summarization (Rush et al., 2015; Chopra et al., 2016), paraphrase generation (Prakash et al., 2016), image captioning (Xu et al., 2015), video summarization (Venugopalan et al., 2014), query-based document summarization (Nema et al., 2017) and so on.", "Most of these models are data hungry and are trained on large amounts of data.", "On the other hand, NLG from structured data has largely been studied in the context of small datasets such as WEATHERGOV (Liang et al., 2009), ROBOCUP (Chen and Mooney, 2008), NFL RECAPS (Barzilay and Lapata, 2005), PRODIGY-METEO (Belz and Kow, 2009) and the TUNA Challenge (Gatt and Belz, 2010).", "Recently, Mei et al. (2016) proposed RNN/LSTM-based neural encoder-decoder models with attention for the WEATHERGOV and ROBOCUP datasets.", "Unlike the datasets mentioned above, the biography dataset introduced by Lebret et al. (2016) is larger (700K { table, description } pairs) and has a much larger vocabulary (400K words as opposed to around 350 or fewer words in the above datasets).", "Further, unlike the feed-forward neural network based model proposed by (Lebret et al., 2016), we use a sequence-to-sequence model and introduce components to address the peculiar characteristics of the task.", "Specifically, we introduce neural components to address the need for attention at two levels and to address the stay on and never look back behaviour required by the decoder.", "Kiddon et al. (2016) have explored the use of checklists to track previously visited ingredients while generating recipes from ingredients.", "Note that two-level attention mechanisms have also been used in the context of summarization (Nallapati et al., 2016), document classification (Yang et al., 2016), dialog systems (Serban et al., 2016), etc.", "However, these works deal with unstructured data (sentences at the higher level and words at a lower level) as opposed to structured data in our case.", "As input we are given an infobox $I = \{(g_i, k_i)\}_{i=1}^{M}$, which is a set of pairs $(g_i, k_i)$ where $g_i$ corresponds to field names, $k_i$ is the sequence of corresponding values, and $M$ is the total number of fields in $I$.", "For example, ($g$ = occupation, $k$ = actor, writer, director) could be one such pair in this set.", "Given such an input, the task is to generate a description $y = y_1, y_2, \ldots, y_m$ containing $m$ words.", "A simple solution is to treat the infobox as a sequence of fields followed by the values corresponding to each field in the order of their appearance in the infobox.", "For example, the infobox could be flattened to produce the following input sequence (the words in bold are field names which act as delimiters): [Name] John Doe [Birth Date] 19 March 1981 [Nationality] Indian", "... The problem can then be cast as a seq2seq generation problem and can be modeled using a standard neural architecture comprising three components:", "(i) an input encoder (using GRU/LSTM cells),", "(ii) an attention mechanism to attend to important values in the input sequence at each time step and", "(iii) a decoder to decode the output one word at a time (again, using GRU/LSTM cells).", "However, this standard model is too generic and does not exploit the specific characteristics of this task.", "We propose additional components, viz.", ",", "(i) a fused bifocal attention mechanism which operates on fields (macro) and values (micro) and", "(ii) a gated orthogonalization mechanism to model stay on and never look back behavior.", "Intuitively, when a human writes a description from a table she keeps track of information at two levels.", "At the macro level, it is important to decide which is the appropriate field to attend to next, and at a micro level (i.e., within a field) it is important to know which values to attend to next.", "To capture this behavior, we use a bifocal attention mechanism as described below.", "Macro Attention: Consider the $i$-th field $g_i$ which has values $k_i = (w_1, w_2, \ldots, w_p)$.", "Let $h^g_i$ be the representation of this field in the infobox.", "This representation can either be", "(i) the word embedding of the field name or", "(ii) some function $f$ of the values in the field or", "(iii) a concatenation of", "(i) and", "(ii).", "The function $f$ could simply be the sum or average of the embeddings of the values in the field.", "Alternately, this function could be a GRU (or LSTM) which treats these values within a field as a sequence and computes the field representation as the final representation of this sequence (i.e.
, the representation of the last time-step).", "We found that a bidirectional GRU is a better choice for $f$ and concatenating the embedding of the field name with this GRU representation works best (Figure 2: Proposed model).", "Further, using a bidirectional GRU cell to take contextual information from neighboring fields also helps (these are the orange colored cells in the top-left block in Figure 2 with macro attention).", "Given these representations $\{h^g_i\}_{i=1}^{M}$ for all the $M$ fields, we compute an attention over the fields (macro level).", "$b^g_{t,i} = v_g^{\top} \tanh(U_g s_{t-1} + V_g h^g_i)$, $\alpha_{t,i} = \exp(b^g_{t,i}) / \sum_{l=1}^{M} \exp(b^g_{t,l})$, $c^g_t = \sum_{i=1}^{M} \alpha_{t,i} h^g_i$ (1), where $s_{t-1}$ is the state of the decoder at time step $t-1$.", "$U_g$, $V_g$ and $v_g$ are parameters, $M$ is the total number of fields in the input, and $c^g_t$ is the macro (field-level) context vector at the $t$-th time step of the decoder.", "Micro Attention: Let $h^w_j$ be the representation of the $j$-th value in a given field.", "This representation could again either be", "(i) simply the embedding of this value", "(ii) or a contextual representation computed using a function $f$ which also considers the other values in the field.", "For example, if $(w_1, w_2, \ldots, w_p)$ are the values in a field then these values can be treated as a sequence and the representation of the $j$-th value can be computed using a bidirectional GRU over this sequence.", "Once again, we found that using a bi-GRU works better than simply using the embedding of the value.", "Once we have such a representation computed for all values across all the fields, we compute the attention over these values (micro level) as shown below: $a^w_{t,j} = v_w^{\top} \tanh(U_w s_{t-1} + V_w h^w_j)$ (2), $\beta_{t,j} = \exp(a^w_{t,j}) / \sum_{l=1}^{W} \exp(a^w_{t,l})$ (3), where $s_{t-1}$ is the state of the decoder at time step $t-1$.", "$U_w$, $V_w$ and $v_w$ are parameters, and $W$ is the total number of values across all the fields.", "Fused Attention: Intuitively, the attention weights assigned to a field should have an influence on all the values belonging to that field.", "To ensure this, we reweigh the micro-level attention weights based on the corresponding macro-level attention weights.", "In other words, we fuse the attention weights at the two levels as: $\beta'_{t,j} = \beta_{t,j} \alpha_{t,F(j)} / \sum_{l=1}^{W} \beta_{t,l} \alpha_{t,F(l)}$ (4), $c^w_t = \sum_{j=1}^{W} \beta'_{t,j} h^w_j$ (5), where $F(j)$ is the field corresponding to the $j$-th value and $c^w_t$ is the micro (value-level) context vector (a code sketch of this fused bifocal attention follows this list).", "We now describe a series of choices made to model stay-on and never look back behavior.", "We first begin with the stay-on property, which essentially implies that if we have paid attention to field $i$ at timestep $t$ then we are likely to pay attention to the same field for a few more timesteps.", "For example, if we are focusing on the occupation field at this timestep then we are likely to focus on it for the next few timesteps till all relevant values in this field have been included in the generated description.", "In other words, we want to remember the field context vector $c^g_t$ for a few timesteps.", "One way of ensuring this is to use a remember (or forget) gate as given below which remembers the previous context vector when required and forgets it when it is time to move on from that field.", "$f_t = \sigma(W^t_f c_{t-1} + W^g_f c^g_t + b_f)$ (6), $c_t = f_t \odot c_{t-1} + (1 - f_t) \odot c^g_t$ (7), where $W^t_f$, $W^g_f$ and $b_f$ are parameters to be learned.", "The job of the forget gate is to ensure that $c_t$ is similar to $c_{t-1}$ when required (i.e., by learning $f_t \approx 1$ when we want to continue focusing on the same field) and different when it is time to move on (by learning that $f_t \approx 0$).", "Next, the never look back property implies that once we have moved away from a field we are unlikely to pay attention to it again.", "For example, once we have rendered all the occupations in the generated description there is no need to return back to the occupation field.", "In other words, once we have moved on ($f_t \approx 0$), we want the successive field context vectors $c^g_t$ to be very different from the previous field vectors $c_{t-1}$.", "One way of ensuring this is to orthogonalize successive field vectors using $\tilde{c}^g_t = c^g_t - \gamma_t \odot \frac{\langle c_{t-1}, c^g_t \rangle}{\langle c_{t-1}, c_{t-1} \rangle} c_{t-1}$ (8), where $\langle a, b \rangle$ is the dot product between vectors $a$ and $b$.", "The above equation essentially subtracts the component of $c^g_t$ along $c_{t-1}$.", "$\gamma_t$ is a learned parameter which controls the degree of orthogonalization, thereby allowing a soft orthogonalization (i.e., the entire component along $c_{t-1}$ is not subtracted but only a fraction of it).", "The above equation only ensures that $\tilde{c}^g_t$ is soft-orthogonal to $c_{t-1}$.", "Alternately, we could pass the sequence of context vectors $c_1, c_2, \ldots, c_t$ generated so far through a GRU cell.", "The state of this GRU cell at each time step would thus be aware of the history of the field vectors till that timestep.", "Now instead of orthogonalizing $c^g_t$ to $c_{t-1}$ we could orthogonalize $c^g_t$ to the hidden state of this GRU at time step $t-1$.", "In practice, we found this to work better as it accounts for all the field vectors in the history instead of only the previous field vector.", "In summary, Equation 7 provides a mechanism for remembering the current field vector when appropriate (thus capturing stay-on behavior) using a remember gate.", "On the other hand, Equation 8 explicitly ensures that the field vector is very different (soft-orthogonal) from the previous field vectors once it is time to move on (thus capturing never look back behavior).", "The value of $\tilde{c}^g_t$ computed in Equation 8 is then used in Equation 7 (a sketch of this gated orthogonalization also follows this list).", "The $c_t$ (macro) thus obtained is then concatenated with $c^w_t$ (micro) and fed to the decoder (see Fig. 2). 4 Experimental Setup: We now describe our experimental setup. 4.1 Datasets: We use the WIKIBIO dataset introduced by Lebret et al. (2016).", "It consists of 728,321 biography articles from English Wikipedia.", "A biography article corresponds to a person (sportsman, politician, historical figure, actor, etc.).", "Each Wikipedia article has an accompanying infobox which serves as the structured input, and the task is to generate the first sentence of the article (which typically is a one-line description of the person).", "We used the same train, valid and test sets which were made publicly available by Lebret et al. (2016).", "We also introduce two new biography datasets, one in French and one in German.", "These datasets were created and pre-processed using the same procedure as outlined in Lebret et al.
(2016).", "Specifically, we extracted the infoboxes and the first sentence from the corresponding Wikipedia article.", "As with the English dataset, we split the French and German datasets randomly into train (80%), test (10%) and valid (10%).", "The French and German datasets extracted by us have been made publicly available.", "The number of examples was 170K and 50K, and the vocabulary sizes were 297K and 143K for French and German, respectively.", "Although in this work we focus only on generating descriptions in one language, we hope that this dataset will also be useful for developing models which jointly learn to generate descriptions from structured data in multiple languages.", "We compare with the following models: 1. (Lebret et al., 2016): This is a conditional language model which uses a feed-forward neural network to predict the next word in the description conditioned on local characteristics (i.e., the words within a field)", "Table 1: Comparison of different models on the English WIKIBIO dataset.", "and global characteristics (i.e., the overall structure of the infobox).", "2. (Mei et al., 2016): This model was proposed in the context of the WEATHERGOV and ROBOCUP datasets which have a much smaller vocabulary.", "They use an improved attention model with additional regularizer terms which influence the weights assigned to the fields.", "3. Basic Seq2Seq: This is the vanilla encode-attend-decode model (Bahdanau et al., 2014).", "Further, to deal with the large vocabulary (400K words) we use a copying mechanism as a post-processing step.", "Specifically, we identify the time steps at which the decoder produces unknown words (denoted by the special symbol UNK).", "For each such time step, we look at the attention weights on the input words and replace the UNK word by the input word which has received maximum attention at this timestep.", "This process is similar to the one described in (Luong et al., 2015).", "Even Lebret et al. (2016) have a copying mechanism tightly integrated with their model.", "We tuned the hyperparameters of all the models using a validation set.", "As mentioned earlier, we used a bidirectional GRU cell as the function $f$ for computing the representation of the fields and the values (see Section 3.1).", "For all the models, we experimented with GRU state sizes of 128, 256 and 512.", "The total number of unique words in the corpus is around 400K (this includes the words in the infobox and the descriptions).", "Of these, we retained only the top 20K words in our vocabulary (same as (Lebret et al., 2016)).", "We initialized the embeddings of these words with 300-dimensional GloVe embeddings (Pennington et al., 2014).", "We used Adam (Kingma and Ba, 2014) with a learning rate of", "0.0004,", "$\beta_1 = 0.9$ and", "$\beta_2 = 0.999$.", "We trained the model for a maximum of 20 epochs and used early stopping with the patience set to 5 epochs.", "We now discuss the results of our experiments.", "Following Lebret et al. (2016), we used BLEU-4, NIST-4 and ROUGE-4 as the evaluation metrics.", "We first make a few observations based on the results on the English dataset (Table 1).", "The basic seq2seq model, as well as the model proposed by Mei et al. (2016), performs better than the model proposed by Lebret et al. (2016).", "Our final model with bifocal attention and gated orthogonalization gives the best performance and does 10% (relative) better than the closest baseline (basic seq2seq) and 21% (relative) better than the current state-of-the-art method (Lebret et al., 2016).", "In Table 2, we show some qualitative examples of the output generated by different models.", "To make a qualitative assessment of the generated sentences, we conducted a human study on a sample of 500 infoboxes which were sampled from the English dataset.", "The annotators for this task were undergraduate and graduate students.", "For each of these infoboxes, we generated summaries using the basic seq2seq model and our final model with bifocal attention and gated orthogonalization.", "For each description and for each model, we asked three annotators to rank the output of the systems based on", "(i) adequacy (i.e., does it capture relevant information from the infobox),", "(ii) fluency (i.e., grammar) and", "(iii) relative preference (i.e., which of the two outputs would be preferred).", "Overall, the average fluency/adequacy (on a scale of 5) for the basic seq2seq model was", "4.04/3.6", "and", "4.19/3.9", "for our model, respectively.", "The results from Table 3 suggest that in general the gated orthogonalization model performs better than the basic seq2seq model.", "Additionally, annotators were asked to verify if the generated summaries look natural (i.e., as if they were generated by humans).", "In 423 out of 500 cases, the annotators said Yes, suggesting that the gated orthogonalization model indeed produces good descriptions.", "The results on the French and German datasets are summarized in Tables 4 and 5, respectively.", "Note that the code of (Lebret et al., 2016) is not publicly available, hence we could not report numbers", "Reference: Samuel Smiles (23 December 1812 - 16 April 1904) was a Scottish author and government reformer who campaigned on a Chartist platform.", "Table 2: Examples of generated descriptions from different models.", "For the last two examples, the name generated by the Basic Seq2Seq model is incorrect because it attended to the preceded by field.", "Table 3: Qualitative comparison of Model A (Seq2Seq) and Model B (our model)", "for French and German using their model.", "We observe that our final model gives the best performance, though the bifocal attention model performs poorly as compared to the basic seq2seq model on French.", "However, the overall performance for French and German is much lower than that for English.", "There could be multiple reasons for this.", "First, the amount of training data in these two languages is smaller than that in English.", "Specifically, the amount of training data available in French (German) is only", "24.2 (7.5)% of that available for English.", "Second, on average the descriptions in French and German are longer than those in English (EN: 26.0 words, FR: 36.5 words and DE:
32.3 words).", "Finally, a manual inspection across the three languages suggests that the English descriptions have a more consistent structure than the French descriptions.", "For example, most English descriptions start with the name followed by the date of birth, but this is not the case in French.", "However, this is only a qualitative observation and it is hard to quantify this characteristic of the French and German datasets. Table 4: Comparison of different models on the French WIKIBIO dataset (BLEU-4 / NIST-4 / ROUGE-4): (Mei et al., 2016) 10.40 / 2.51 / 7.81; Basic Seq2Seq 14.50 / 3.02 / 12.22; +Fused bifocal attention 13.80 / 2.86 / 12.37; +Gated orthogonalization 15.52 / 3.30 / 12.80. Table 5: Comparison of different models on the German WIKIBIO dataset (BLEU-4 / NIST-4 / ROUGE-4): (Mei et al., 2016) 9.30 / 2.23 / 5.85; Basic Seq2Seq 17.05 / 3.09 / 12.16; +Fused bifocal attention 20.38 / 3.43 / 14.89; +Gated orthogonalization 23.33 / 4.24 / 16.40.", "If the proposed model indeed works well then we should see attention weights that are consistent with the stay on and never look back behavior.", "To verify this, we plotted the attention weights in cases where the model with gated orthogonalization does better than the model with only bifocal attention.", "Figure 3 shows the attention weights corresponding to the infobox in Figure 4. Notice that the model without gated orthogonalization has attention on both the name field and the article title while rendering the name.", "The model with gated orthogonalization, on the other hand, stays on the name", "Figure 3: Comparison of the attention weights and descriptions produced for the infobox in Figure 4.", "Figure 4: Wikipedia infobox for Samuel Smiles.", "Figure 5: Wikipedia infobox for Mark Tobey.", "field for as long as it is required but then moves on and never returns to it (as expected).", "Due to lack of space, we do not show similar plots for French and German, but we would like to mention that, in general, the differences between the attention weights learned by the models with and without gated orthogonalization were more pronounced for the French/German datasets than for the English dataset.", "This is in agreement with the results reported in Tables 4 and 5, where the improvements given by gated orthogonalization are larger for French/German than for English.", "Table 6: Out-of-domain results (BLEU-4).", "What if the model sees a different type of person at test time?", "For example, what if the training data does not contain any sportspersons but at test time we encounter the infobox of a sportsperson?", "This is the same as seeing out-of-domain data at test time.", "Such a situation is quite expected in the products domain where new products with new features (fields) get frequently added to the catalog.", "We were interested in three questions here.", "First, we wanted to see if testing the model on out-of-domain data indeed leads to a drop in performance.", "For this, we compared the performance of our best model in two scenarios:", "(i) trained on data from all domains (including the target domain) and tested on the target domain (sports, arts) and", "(ii) trained on data from all domains except the target domain and tested on the target domain.", "Comparing rows 1 and 2 of Table 6, we observed a significant drop in performance.", "Note that the numbers for the sports domain in row 1 are much better than those for the arts domain because roughly 40% of the WIKIBIO training data contains sportspersons.", "Next, we wanted to see if we can use a small", "amount of data from the target domain to fine-tune a model trained on the out-of-domain data.", "We observe that even with very small amounts of target-domain data the performance starts improving significantly (see rows 3 and 4 of Table 6).", "Note that if we train a model from scratch with only limited data from the target domain, instead of fine-tuning a model trained on a different source domain, then the performance is very poor.", "In particular, training a model from scratch with 10K training instances we get BLEU scores of", "16.2 and 28.4", "for arts and sports, respectively.", "Finally, even though the actual words used for describing a sportsperson (footballer, cricketer, etc.) would be very different from the words used to describe an artist (actor, musician, etc.), they might share many fields (for example, date of birth, occupation, etc.).", "As seen in Figure 6 (attention weights corresponding to the infobox in Figure 5), the model predicts the attention weights correctly for common fields (such as occupation) but is unable to use the right vocabulary to describe the occupation (since it has not seen such words frequently in the training data).", "However, once we fine-tune the model with limited data from the target domain, we see that it picks up the new vocabulary and produces a correct description of the occupation.", "To address specific characteristics of the problem, we propose neural components for fused bifocal attention and gated orthogonalization to address stay on and never look back behavior while decoding.", "Our final model outperforms an existing state-of-the-art model on the large-scale WIKIBIO dataset by 21%.", "We also introduce datasets for French and German and demonstrate that our model gives state-of-the-art results on these datasets.", "Finally, we perform experiments with an out-of-domain model and show that if such a model is fine-tuned with small amounts of in-domain data then it can give improved performance on the target domain.", "Given the multilingual nature of the new datasets, as future work, we would like to build models which can jointly learn to generate natural language descriptions from structured data in multiple languages.", "One idea is to replace the concepts in the input infobox by Wikidata concept ids which are language agnostic.", "A large amount of the input vocabulary could thus be shared across languages, thereby facilitating joint learning.", "We thank Google for supporting Preksha Nema through their Google India Ph.D.", "Fellowship program.", "We also thank Microsoft Research India for supporting Shreyas Shetty through their generous travel grant for attending the conference." ]
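The fused bifocal attention of Eqs. (1)-(5) can be sketched as follows. This is a schematic PyTorch fragment, not the authors' code: the parameter shapes, the field_of index tensor, and the packaging of the six projection parameters are illustrative assumptions of this sketch.

```python
import torch

def additive_scores(s_prev, H, U, V, v):
    # b_i = v^T tanh(U s_{t-1} + V h_i), computed for every row h_i of H.
    return torch.tanh(s_prev @ U.T + H @ V.T) @ v

def fused_bifocal_attention(s_prev, h_g, h_w, field_of, params):
    # h_g: (M, d) field representations; h_w: (W, d) value representations;
    # field_of: LongTensor of length W mapping each value to its field index.
    U_g, V_g, v_g, U_w, V_w, v_w = params
    # Macro attention over the M fields (Eq. 1).
    alpha = torch.softmax(additive_scores(s_prev, h_g, U_g, V_g, v_g), dim=0)
    c_g = alpha @ h_g
    # Micro attention over the W values (Eqs. 2-3).
    beta = torch.softmax(additive_scores(s_prev, h_w, U_w, V_w, v_w), dim=0)
    # Fusion: reweight each value by its field's macro weight (Eq. 4).
    fused = beta * alpha[field_of]
    fused = fused / fused.sum()
    # Micro context vector (Eq. 5); c_g and c_w are both fed to the decoder.
    c_w = fused @ h_w
    return c_g, c_w
```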
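The gated orthogonalization mechanism (Eqs. (6)-(8) as reconstructed above) can be sketched similarly. Two parts of this fragment are explicit assumptions: the exact inputs of the forget gate (the garbled source only names the parameters $W^t_f$, $W^g_f$ and $b_f$), and a single GRU-state vector hist standing in for the history variant the authors report works best.

```python
import torch

class GatedOrthogonalization(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        # Assumed gate parameterization: one affine map over [c_{t-1}; c_t^g].
        self.gate = torch.nn.Linear(2 * dim, dim)
        # gamma_t of Eq. (8), controlling the degree of orthogonalization.
        self.gamma = torch.nn.Parameter(torch.tensor(0.5))

    def forward(self, c_g, c_prev, hist):
        # Never look back (Eq. 8): softly remove the component of c_t^g
        # along the history vector (GRU state over past field vectors).
        proj = (hist @ c_g) / (hist @ hist + 1e-12) * hist
        c_g = c_g - self.gamma * proj
        # Stay on (Eqs. 6-7): the remember gate mixes the previous context
        # back in; f ~ 1 keeps the current field, f ~ 0 moves on.
        f = torch.sigmoid(self.gate(torch.cat([c_prev, c_g], dim=-1)))
        return f * c_prev + (1 - f) * c_g
```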
[ "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "objective", "result", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Example sentences for targeted words in a dictionary play an important role to help readers understand the usage of words.", "Traditionally, example sentences in a dictionary are usually created by linguistics experts, which are labor-intensive and knowledge-intensive.", "In this paper, we introduce the problem of dictionary example sentence generation , aiming to automatically generate dictionary example sentences for targeted words according to the corresponding definitions.", "This task is challenging especially for polysemous words, because the generated sentences need to reflect different usages and meanings of these targeted words.", "Targeted readers may also have different backgrounds and educational levels.", "It is essential to generate example sentences that can be understandable for different backgrounds and levels of audiences.", "To solve these problems, we propose a controllable target-word-aware model for this task.", "Our proposed model can generate reasonable examples for targeted words, even for polysemous words.", "In addition, our model allows users to provide explicit control over attributes related to readability, such as length and lexical complexity, thus generating suitable examples for targeted audiences.", "Automatic and human evaluations on the Oxford dictionary dataset show that our model can generate suitable examples for targeted words with specific definitions while meeting the desired readability.", "A dictionary usually consists of targeted words, part-of-speech (POS) tags, definitions and corresponding example sentences.", "Definitions and their corresponding examples enable audiences to better master new words, understand unfamiliar texts and the usage of the words in typical sentences, where a definition is a simple description for the meaning of the targeted word, and an example shows audiences how to use the word under this definition.", "Both definitions and examples are critical, playing an important role in language acquisition and natural language understanding.", "However, it is often the case that audiences cannot find satisfactory example sentences for rarely used or newly coined words.", "On the other hand, it is time-consuming for experts to create dictionary examples for these words.", "With the advancement of AI technologies, it is a natural direction to study how to generate dictionary examples automatically, to assist dictionary compilation and help humans understand the corresponding targeted words.", "Dictionary example sentence generation aims to generate example sentences for targeted words to reflect their definitions and usages automatically.", "Recently, definition generation (Noraset et al., 2017; Gadetsky et al., 2018; Ishiwatari et al., 2019) has been extensively studied, yet generating example sentences is not well-studied.", "To the best of our knowledge, we are the first group to introduce this challenging problem.", "One main challenge for this task is that targeted words must appear in outputs.", "Another challenge is that polysemous words (e.g. 
bank' ), which have multiple senses, even multiple POS tags, are ubiquitous.", "Thus, a polysemous word in generated examples should convey the given sense and POS tag.", "Lexically constrained text generation is meant to incorporate some specific keywords into outputs, which has been widely studied.", "Previous lexically constrained models inject the given keywords into outputs either by manipulating the decoding process (Mou et al., 2015; Hokamp and Liu, 2017), or using the keywords as the initial state and refining it with a series of actions, such as insertion and replacement until it is completed (He and Li, 2021).", "It is natural to use these lexically constrained models as baselines, since they have solved the first challenge.", "In response to the second one, we further extend these lexically constrained models by feeding the definition into the encoder and then 610 injecting the targeted word during decoding.", "However, these models have two inherent drawbacks for this task: (1) During inference, these models generate a sentence based on the definition and force the targeted word to appear in outputs.", "However, they fail to explore the correlation between the targeted word and input, thus sacrificing the generation quality to ensure the targeted word appears in outputs.", "(2) These models are computation-intensive, as they need to manipulate the decoding process.", "To circumvent these problems, the proposed model is expected to understand this task so that there is no need to interfere with the decoding process.", "To achieve this goal, we directly feed the targeted word and definition into the model.", "This simple change brings two advantages over lexically constrained generation models: (1) During training, our model fully explores the correlation between targeted words and definitions, and gradually acquires this task.", "As a result, even the proposed model does not control decoding, outputs contain targeted words in 99.6% of cases.", "(2) With the release of control over decoding, our model significantly improves the generation quality and dramatically reduces the inference latency.", "Apart from the above two challenges, the proposed model should generate suitable examples to match the readability levels of different audiences, such as children and college students.", "To address the third challenge, the proposed model is expected to control the readability-related attributes of outputs, namely length and lexical complexity.", "Inspired by Keskar et al. 
(2019), the proposed model is trained on discrete control tokens, which are related to the length and lexical complexity of gold example sentences.", "By doing so, the proposed model will learn to associate the control tokens with the length and lexical complexity of outputs.", "As a result, we can control the readability of outputs by varying the length and lexical complexity control tokens.", "Our contributions are summarized as follows: (1) We introduce the dictionary example sentence generation task.", "(2) We propose a large dataset for dictionary example generation.", "(3) We propose a controllable target-word-aware model and several baselines for this task 1 .", "(4) We propose two BERT-based classifiers to automatically evaluate whether the target word in the generated example conveys the given sense and POS tag, respectively.", "1 Our dataset and code are available at https://github.com/NLPCode/CDEG. (5) Our", "experiment results on the Oxford dictionary dataset show that our model outperforms baselines in terms of generation quality, diversity, POS and definition accuracy.", "More importantly, our model can tailor examples to fit the needs of targeted audiences by controlling the length and lexical complexity.", "Dictionary Example Sentence Generation aims to generate a fluent example $E = \{e_1, \ldots, e_T\}$ for the targeted word $w$ under a specific definition $D = \{d_1, \ldots, d_S\}$, where $w$ should appear in $E$ and convey $D$.", "During training, this task aims to maximize the conditional probability of $E$: $p(E \mid w, D; \theta) = \prod_{t=1}^{T} p(e_t \mid e_{<t}, w, D; \theta)$ (1).", "3 Methodology 3.1 Motivation Our motivation is to make the model understand dictionary example sentence generation so that we do not need to interfere with the decoding process.", "Intuitively, if the model has mastered the requirements of this task, the model will know that reasonable outputs should contain the targeted word under the specific sense when seeing the target word and definition.", "Driven by this motivation, we use an encoder-decoder architecture, initialized with BART (Lewis et al., 2020), where the encoder directly takes the targeted word and definition as inputs.", "During training, the model gradually learns to incorporate the targeted word under the specific meaning into the output; otherwise, it will suffer a large cross-entropy loss between the predicted distributions of the decoder and the gold examples.", "To gain control over the readability of outputs, the model is also trained on the readability-related control tokens of gold examples.", "In this way, the model will gradually learn to correlate the special token with a readability attribute of outputs; otherwise, it will also suffer a large cross-entropy loss.", "See Section 3.2 for readability-related control tokens.", "The overview of the proposed model is shown in Figure 1.", "The encoder input consists of five parts: the targeted word, POS tag, length, lexical complexity, and definition.", "Each part begins with a special token, indicating the start of this part.", "For example, <Word> means the following content is the targeted word.", "The decoder aims to generate examples based on the encoder inputs.", "To control the readability of outputs, we need to find out which attributes of outputs are related to readability.", "Flesch-Kincaid Grade Level (FKGL) and Flesch Reading-Ease Score (FRES) (Kincaid et al., 1975) are widely used to assess the difficulty of English text.", "Both metrics are related to the average sentence length and assume
that the longer the sentence, the more difficult the text is to understand.", "On the other hand, lexical complexity also affects readability (Shardlow, 2014).", "For example, too many complicated words appearing in a text may hinder audiences' understanding of the text.", "Length (Len).", "Len denotes the number of tokens in a tokenized example.", "Figure 2", "(d) shows that example lengths range from 3 to 60.", "Hence, we add 58 learnable Len control tokens to the vocabulary.", "Lexical Complexity (LC).", "Word frequencies are the most reliable predictor of word complexity (Paetzold and Specia, 2016).", "Given this, we use word frequencies as a proxy of LC.", "In the following, we will show how to compute the LC of an example (see the illustrative sketch below).", "First, we tokenize all examples in the training set with the NLTK word tokenizer.", "Next, we rank unique words by word frequencies in descending order.", "Then, we compute the word ranks for all words in one example.", "After that, we calculate the third quartile of the log-ranks and use it as the LC for the example.", "Finally, we discretize all LC values into 40 discrete LC labels.", "LC label distribution is shown in Figure 2", "(e).", "Therefore, we add 40 trainable LC control tokens (0-39) to the vocabulary.", "Since we utilize the BART tokenizer, feeding the original form of a targeted word into the encoder may hinder the decoder from injecting it into the output.", "As shown in Table 1, the token sequence for the targeted word 'banked' does not appear in the tokenized example (row 1).", "Therefore, the model must learn to map {b, anked} to {Ġbank, ed} to include it in outputs, which undoubtedly increases the difficulty of incorporating the word into outputs.", "This problem is caused by the discrepancy between the token sequences of the targeted word and example.", "To solve this, we add an initial space to the targeted word and example so that the tokenized word appears in the tokenized example (row 2).", "By doing so, the decoder can copy the targeted word from the encoder to outputs instead of mapping, thus improving the word coverage by 8.6%.", "During training, we feed the targeted word, gold POS, Len and LC labels of examples into the encoder,", "and then fine-tune the model by minimizing the cross-entropy loss.", "During inference, we set Len and LC to fixed values to generate examples with the expected Len and LC.", "In text style transfer, Shen et al. (2017), Hu et al. (2017) and Li et al.
(2018) used a pre-trained classifier to assess whether outputs have the desired attribute.", "Inspired by this, we propose a definition classifier to evaluate whether the targeted word $w$ in the example $E$ conveys the given meaning $D$.", "The definition model takes a triple of word, definition and example $(w, D, E)$ as input.", "To train the definition model, we first create the synthetic data $\{(w, D, E, L)\}$.", "If $w$ in $E$ conveys $D$, the label $L$ is 1, denoting that the data instance is positive.", "Otherwise, $L$ is 0, denoting that the data instance is negative.", "We directly select the positive data instances $(w, D, E)$ from the Oxford training or validation set.", "Then, we create three kinds of negative data instances based on a positive data instance by (1) replacing $w$ with another word in $E$ or the vocabulary; (2) replacing $D$ with another definition of $w$ or of other words; (3) replacing $E$ with another sentence, which does not contain $w$.", "For ease of understanding, we show several synthetic data instances in Table 12 in the Appendix.", "We fine-tune BERT-base (Devlin et al., 2019) on the synthetic training set (see Figure 3 for the model input), achieving 89.9% F1 on the validation set.", "Similarly, we train a BERT-based POS classifier to assess whether $w$ in $E$ reflects the given POS tag, which achieves 98.5% F1 on the synthetic validation set.", "We show the statistics of synthetic data for the definition and POS models, and their performance on the validation set, in Appendix A and B. 4 Experiments 4.1 Experiment Setups Dataset and Pre-processing.", "We evaluate our proposed model on the Oxford Dictionary 4 .", "Gadetsky et al. (2018) released a dataset based on this resource for definition generation.", "However, this dataset is unsuitable for dictionary example generation due to the following limitations (Chang et al., 2018): (1) each definition has only one example sentence; (2) some examples in their dataset do not contain targeted words.", "To solve these problems, we collect a new Oxford dataset by filtering out definitions with fewer than two examples, and examples not containing the targeted word.", "In addition, we remove targeted words with fewer than two or more than 20 letters.", "Each data instance is a quadruplet, containing a targeted word, POS tag, definition and examples of the word usage.", "We split the dataset into training, validation and test sets based on the triplets (lemma, POS, definition), which are mutually exclusive across the three sets (see Table 2 for statistics of this dataset).", "Different from the training set, the validation/test set only contains polysemous words with at least two definitions, since it is more challenging to generate examples for polysemy.", "During training, each sense along with all corresponding example sentences will be used to update models.", "During inference, we will generate only one example sentence for each sense, but each lemma in English may have multiple inflections (for example, inflected forms of the verb 'bank' include 'banked', 'banking', etc.).", "Given that we use BLEU to evaluate the generation quality, we only keep the word form with the most example sentences for each definition tuple (lemma, POS, definition) in the validation/test set.", "For the sense ('bank', Verb, 'heap (a substance) into a mass or mound') in the test set, two examples contain 'banked' and only one example contains 'banking', so we keep the examples containing 'banked'.", "The data distributions of the training set are shown in Figure 2.", "See
Appendix C for the data distributions of validation and test sets.", "Baselines.", "4 https://en.oxforddictionaries.com/ Table 3: Results on the Oxford test set. Columns: # | Models/Metrics | Coverage | POSA | DefA | B-2 | B-4 | SB-4 | D-2 | D-4 | AveLen | Latency. Retrieval Models: 1 One-Billion-Word | 96.4% | 82.9% | 35.6% | 12.5% | 1.6% | 18.7% | 53.4% | 76.6% | 28.7 | 9.033; 2 Training set | 97.3% | 84.0% | 35.6% | 17.3% | 6.8% | 18.2% | 54.3% | 77.1% | 27.3 | 0.371. Lexically Constrained Models without Definitions: 3 sep-B/F | 100.0% | 86.1% | 32.6% | 25.1% | 4.7% | 44.8% | 29.3% | 61.0% | 18.2 | 0.964; 4 asyn-B/F | 100.0% | 86.1% | 32.3% | 24.5% | 4.5% | 43.0% | 30.1% | 63.1% | 19.6 | 0.931; 5 GBS | 100.0% | 83.8% | 33.4% | 17.0% | 2.5% | 61.6% | 23.7% | 44.4% | 19.9 | 7.854; 6 X-MCMC-C | 100.0% | 0.1% | 7.5% | 15.6% | 2.3% | 15.1% | 53.2% | 95.0% | 12.1 | 24.23. Lexically Constrained Models with Definitions: 7 sep-B/F | 100.0% | 87.7% | 77.1% | 27.9% | 6.4% | 30.0% | 43.5% | 83.2% | 15.5 | 1.002; 8 asyn-B/F | 100.0% | 89.6% | 77.5% | 27.8% | 6.2% | 30.0% | 42.9% | 83.9% | 16.9 | 0.991; 9 GBS | 100.0% | 91.3% | 77.5% | 26.3% | 6.1% | 28.1% | 44.6% | 84.2% | 15.9 | 8.025. Our Models (+ Word + POS + Len 14 + LC 25): 10 Random (greedy) | 99.8% | 96.9% | 81.8% | 28.0% | 5.4% | 40.5% | 34.3% | 69.9% | 14.0 | 0.164; 11 BART-base (greedy) | 99.6% | 97.2% | 87.7% | 28.8% | 7.6% | 23.9% | 49.5% | 86.4% | 14.0 | 0.161; 12 BART-base (beam 5) | 99.5% | 97.4% | 87.8% | 31.4% | 9.6% | 26.2% | 46.8% | 84.2% | 14.1 | 0.195. We first implement two retrieval baselines by randomly selecting examples containing", "the targeted words from the One-Billion-Word 5 corpus or the training set, respectively.", "We adopt four lexically constrained generation models: two variants of the backward forward model (sep-B/F and asyn-B/F) (Mou et al., 2015), grid beam search (GBS) (Hokamp and Liu, 2017) and X-MCMC-C (He and Li, 2021).", "We implement the former three baselines based on GPT-2 small (117M).", "We train X-MCMC-C with the code provided by He and Li (2021), which is based on XLNet-base (110M).", "These methods generate sentences containing targeted words without considering definitions.", "To remedy this, we re-implement the former three models based on BART-base (139M), where the encoder takes the definition as input and the decoder incorporates the word during inference.", "Implementation Details.", "We initialize our model with BART-base, which has a comparable number of parameters to the generation baselines.", "For the generation baselines and our models, we use AdamW (Loshchilov and Hutter, 2019) with an initial learning rate of 1e-5 to update parameters for four epochs and choose the checkpoints with the lowest validation loss.", "During inference, we run beam search decoding with beam width = 5 on the generation baselines and our model.", "We also run greedy decoding on our model.", "Following He and Li (2021), we run X-MCMC-C for 200 steps and select the example with the lowest negative log-likelihood (NLL) as output.", "To discourage the generation of repetitive tokens, we apply the repetition penalty strategy of Keskar et al.
(2019) with the penalty parameter set to 1.3 for all models.", "We implement all models with the HuggingFace Transformers library (Wolf et al., 2019). 5 http://www.statmt.org/lm-benchmark/", "Evaluation Metrics.", "We evaluate the generated examples from four aspects: Q1: Does the generated example contain the targeted word?", "Q2: Does the targeted word in the generated example convey the given sense?", "Q3 & Q4: Are the outputs fluent and diverse?", "First, we check whether the targeted word appears in the example, indicated as word Coverage.", "If so, we further assess whether the targeted word conveys the given POS tag and sense with the BERT-based POS and definition models, called POS Accuracy (POSA) and Definition Accuracy (DefA).", "As for Q3, it is non-trivial to evaluate the generation quality.", "In this paper, we do not use NLL as a metric for sentence fluency, since lower NLL does not always denote better sentence quality (Holtzman et al., 2020).", "We use BLEU (Papineni et al., 2002) to measure the n-gram similarity between the generated examples and human references, which is a widely-used automatic metric for generation quality.", "One concern is that BLEU may not be ideal for dictionary example generation, since there may exist many sentences that could be appropriate for a given word and definition.", "To remedy this, each sense (i.e., definition triplet) in the validation and test sets contains an average of 11 examples, which provide a richer and more diverse test-bed for further automatic evaluation.", "To answer Q4, we use Self-BLEU (Zhu et al., 2018) and Distinct n-gram (Li et al., 2016) to measure the generation diversity.", "Self-BLEU-4 (SB-4) is computed by treating one sentence as the hypothesis, and the first 1K generated sentences excluding the hypothesis as references.", "Distinct bigram (D-2) and 4-gram (D-4) indicate the proportions of unique bigrams and 4-grams, respectively.", "Table 3 reports the main experiment results on the test set, from which we can draw four conclusions: (1) Generation models are critical.", "We cannot retrieve examples for all words.", "For example, only 97.3% of the words in the test set appear in the training set.", "We do not see any improvement with the larger retrieval corpus (rows 1-2), which instead brings a much higher retrieval latency.", "By comparison, the generation models have the potential to generate examples for unseen words, thus greatly improving coverage.", "(2) The definition is helpful.", "Compared with the generation baselines w/o definitions (rows 3-6), their counterparts w/ definitions (rows 7-9) significantly improve DefA to around 77%.", "As we have mentioned before, all words in the test set are polysemous (see Figure 6", "(a)).", "That is why the definition is useful and indispensable for this task.", "(3) The pre-trained model does matter.", "Compared with the random counterpart (row 10), our model initialized with the BART-base model (row 11) can generate more fluent (B-4) and diverse (SB-4, D-2, D-4) sentences while improving DefA by around 6%.", "That is possibly because BART acquires some syntactic and semantic knowledge during pre-training, which is useful for this task.", "(4) The proposed models outperform the other generation baselines in most metrics.", "One problem with lexically constrained generation models (rows 7-9) is that they do not explicitly explore the correlation between the targeted word and input.", "When feeding a definition into these models, they just generate a sentence based
on the encoder input and force the targeted word to appear in outputs.", "By interfering with decoding, they can achieve 100% word coverage, yet this is achieved at the cost of generation quality, POSA and DefA.", "Another problem is that their manipulations of decoding cause higher inference latency.", "By comparison, our proposed model directly takes the targeted word as input instead of compulsorily injecting it into outputs during inference.", "This simple change brings two advantages over the lexically constrained generation methods: (1) Our model can fully explore the correlation between the targeted word and the definition, and gradually acquires this task during training.", "Table 4: Results of the ablation study on the test set. # | Variants | Coverage | POSA | DefA | B-4 | D-4: 1 Full model | 99.6% | 97.2% | 87.7% | 7.6% | 86.4%; 2 - Word | 14.5% | 17.3% | 16.1% | 3.6% | 85.9%; 3 - POS | 99.6% | 96.6% | 87.6% | 7.5% | 86.6%; 4 - Definition | 99.4% | 97.4% | 35.8% | 4.3% | 73.2%. As a result, when", "feeding a targeted word and a definition into the proposed model, it will understand that reasonable outputs should contain the targeted word under the specific sense.", "That is why, even though the proposed model does not control the decoding process, it does not sacrifice word coverage (e.g., 99.6% word coverage in row 11).", "(2) Eliminating interference with decoding brings substantial improvements in generation quality (B-4), POSA, and DefA, and dramatically reduces inference latency.", "We perform an ablation study to demonstrate the importance of each design.", "We first train variants of the full model by removing the word, POS, and definition, and then run greedy decoding on the well-trained models to generate examples.", "We show the results on the test set in Table 4.", "Compared with the full model (row 1), we note that: (1) removing the targeted word significantly decreases the word coverage (row 2).", "(2) POS helps to improve the POSA (row 3).", "(3) the definition improves the DefA (row 4).", "These observations verify the effectiveness of these components.", "Len and LC control tokens are mainly used to control the readability of outputs and do not degrade the generated examples (see Table 13 in the Appendix).", "We also test the effect of the leading space.", "As shown in Table 5, adding the space increases the word coverage by 8.6%, establishing the importance of this design.", "The pointer network (Gulcehre et al., 2016; See et al., 2017) is used to copy content from the source into outputs.", "However, only using the pointer network cannot improve the word coverage, as it does not solve the mapping issue mentioned in Section 3.3.", "Therefore, we do not use the pointer network.", "Effect of Control Tokens.", "In Table 13, we have shown that Len and LC control tokens affect readability via HF, but two questions are still unclear: Q1: Do these control tokens have the desired effects on their associated attributes, length and lexical complexity?", "Q2: What is the correlation between LC and readability?", "To answer Q1, we generate examples by running greedy decoding on our model (row 11 of Table 3) with different control tokens.", "From Figure 4", "(a) and", "(b), we see that: (1) the average LC and Len of outputs increase linearly with the gold LC and Len labels; (2) the MSE values between Len and gold Len labels are negligible, while the MSE values between LC and gold LC labels are relatively large, especially when LC > 30, indicating that the control ability of the model on LC decreases, possibly due to the limited training
data (see Figure 2", "(e)).", "Therefore, we can conclude that Len and LC control tokens do affect their associated attributes.", "To answer Q2, we compute the Pearson correlation coefficient (PCC) between the LC of outputs and two widely used readability metrics, FKGL and FRES.", "Since LC is based on the word frequency, we compute the PCC between LC and the proportion of high-frequency words with a word rank lower than 2K (HF (2K)).", "FKGL and FRES are related to the average number of syllables of outputs (AveSyl), so we also compute the PCC between LC and AveSyl.", "As shown in Figure 4", "(c), the LC values of outputs are strongly positively correlated with the gold/expected LC labels, which again verifies our model's control ability over LC.", "We also notice that the PCC between LC and HF is -0.99, showing that LC can control the other readability-related metrics of outputs by controlling HF.", "We show more results of control tokens in Appendix H. Effect of the Number of Senses.", "As shown in the first part of Table 6, as the number of definitions increases, it becomes increasingly challenging to generate examples satisfying the definition(s), thus causing a decrease in DefA.", "Effect of POS Tags.", "As shown in the second part of Table 6, our model performs worst on the adverb case, especially in POSA and DefA.", "We found that our model may ignore the adverb POS tag and use the adjective POS tag (see the targeted word 'worse' in Table 15).", "We presume that there are two possible reasons: (1) in the training set, the adverb training data is far scarcer than the adjective data (see Figure 2", "(c)), so the adverb embedding may not be well learned and updated; (2) for some adverbs, such as 'worse', the adjective definition is much more common, so the pre-trained model, BART, may be biased towards the adjective meaning.", "Our model can generate reasonable examples for different words and for the same word with different definitions, such as 'star' and 'satisfy'.", "Moreover, our model can generate plausible examples for different inflected forms of words, such as 'sentence'.", "Table 8 shows that we can control the length and lexical complexity of examples generated with our model by varying the control labels.", "To summarize, our model can not only generate meaningful examples for existing words, but also has a strong control ability over the length and lexical complexity of outputs.", "Please refer to Appendix E, F and G for the effect of word frequencies, unseen words and the size of training data.", "Please refer to Appendix I for more detailed sample analysis.", "We conduct a human evaluation to further compare our model with asyn-B/F and GBS (rows 8, 9 and 11 of Table 3).", "For each model, we randomly select 50 generated examples and invite three annotators 7 to label the sentences.", "Annotators first rate the sentence fluency on a 5-point Likert scale from 1 (not fluent) to 5 (extremely fluent).", "Then, annotators assess whether the meaning of the targeted word in the output is the same as the given definition on a 3-point Likert scale, from 1 (totally different) to 3 (exactly the same).", "Finally, annotators judge whether the POS of the targeted word in the output is consistent with the given POS.", "We show the detailed annotation method in Appendix D.
As shown in Table 9, our proposed model outperforms the baselines in human evaluation on all metrics.", "The PCCs between the two automatic evaluation metrics (DefA, POSA) and the related human evaluation scores are 73.5% and 90.3% (p-value < 0.05), indicating positive and strong correlations.", "We also conduct a human evaluation to assess our model's control ability over readability.", "We first randomly select 50 groups of examples generated by our model with different LC (10, 20, 30) + Len 14. 7 All annotators are Ph.D. students and are independent of our research group.", "Then, we ask annotators to rank the sentences on readability in each group.", "The most difficult sentence receives a score of 3; the others receive scores of 2 and 1.", "Annotators can give the same rank to different examples if they have no preference.", "As shown at the bottom of Table 9, the difficulty of the generated sentences increases with LC, verifying that our model can control the readability of outputs via LC control tokens.", "Inter-rater agreement measured by Fleiss' kappa (Fleiss, 1971) is 0.51, 0.73, 0.90 and 0.60 for fluency, definition, POS and readability, indicating moderate, substantial, almost perfect and moderate inter-rater agreement, according to Landis and Koch (1977).", "Word Sense Disambiguation.", "Word Sense Disambiguation (WSD) (Navigli, 2009) is a fundamental task and long-standing challenge in NLP, which aims to associate an ambiguous word in context with the exact sense from a finite set of possible choices.", "Previous work formulates the task as a token classification problem (Raganato et al., 2017) or a sentence-pair (context and gloss pair) classification problem (Huang et al., 2019).", "WiC (Pilehvar and Camacho-Collados, 2019) is framed as a binary classification problem, which aims to identify whether the occurrences of the targeted word in the first and second context correspond to the same meaning or not.", "Our proposed work is related to word sense disambiguation, yet it is a generation task, which is more challenging.", "Controllable Text Generation.", "Controllable text generation aims to generate text in a controlled way, which has attracted wide attention.", "One line of research injects pre-specified keywords into outputs by controlling the decoding process (Mou et al., 2015; Hokamp and Liu, 2017; Post and Vilar, 2018) or refining candidate outputs iteratively (Miao et al., 2019; Sha, 2020; He and Li, 2021; He, 2021).", "Another kind of work uses control tokens to manipulate text attributes, such as the length (Kikuchi et al., 2016; Fan et al., 2018), topic (Ficler and Goldberg, 2017; Keskar et al., 2019), and grade level for text simplification (Scarton and Specia, 2018; Nishihara et al., 2019).", "In this paper, we first introduce the dictionary example generation task, which also requires the targeted word to appear in outputs.", "To this end, we use a target-word-aware model to generate examples for given words.", "Different from the former line of work, our proposed model does not interfere with the decoding process, thus reducing the inference time and improving the generation quality.", "Moreover, we expect to produce tailor-made outputs for different audiences.", "Inspired by the latter kind of work, our model takes readability-related control tokens to generate suitable example sentences with the desired readability.", "Dictionary Example Generation.", "Two recent works are related to dictionary example generation.", "One work is GPT-3 (Brown et al., 2020), a large-scale
autoregressive language model.", "To qualitatively test GPT-3's ability on the few-shot task of using a new/nonexistent word in a sentence, Brown et al. (2020) gave GPT-3 the definition of a nonexistent word, such as 'screeg', and then asked GPT-3 to use it in a sentence.", "However, they did not formally define this task.", "Similar to our work, another concurrent work (Barba et al., 2021) also gives a formal statement of the dictionary example generation task.", "However, they did not evaluate the quality of the generated examples directly.", "In their work, they aimed to improve WSD models by augmenting WSD datasets with the generated examples.", "Compared with their work, we directly evaluate whether the targeted word in the generated example reflects the given sense and POS tag with the proposed BERT-based classifiers.", "Our work also explores how to generate suitable examples for different targeted audiences.", "In this paper, we first introduce the dictionary example sentence generation problem, and propose a controllable target-word-aware model and several strong baselines for it.", "We propose two BERT-based classifiers to evaluate the definition and POS accuracy of generated examples.", "Our experiment results on the Oxford dictionary dataset show that our model outperforms the baselines in most metrics and can generate appropriate examples meeting different audiences' understanding levels.", "We would like to thank the anonymous reviewers for their constructive and informative feedback.", "This project is supported by funding from the HKU-SCF FinTech Academy." ]
[ "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "objective", "objective", "objective", "other", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "result", "method", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "method", "objective", "abstain", "method", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "objective", "objective", "objective", "result", "other", "other" ]
[ "A central concern in Computational Social Sciences (CSS) is fairness: where the role of NLP is to scale up text analysis to large corpora, the quality of automatic analyses should be as independent as possible of textual properties.", "We analyze the performance of a state-of-the-art neural model on the task of political claims detection (i.e., the identification of forward-looking statements made by political actors) and identify a strong frequency bias: claims made by frequent actors are recognized better.", "We propose two simple debiasing methods which mask proper names and pronouns during training of the model, thus removing personal information bias.", "We find that", "(a) these methods significantly decrease frequency bias while keeping the overall performance stable; and", "(b) the resulting models improve when evaluated in an out-of-domain setting.", "In recent years, NLP methods have found increasing adoption in the social sciences as part of the movement towards Computational Social Sciences or CSS (Lazer et al., 2009).", "An important part of the appeal of CSS is the promise to scale up the amount of data under consideration: from what can be annotated manually to what can be analyzed automatically, typically an increase by several orders of magnitude, enabling a paradigm shift towards new research questions (Chang et al., 2014).", "However, this shift comes with new challenges: if the analyses are carried out by a machine, how can we trust that any outcomes really stem from the underlying data, rather than from processing artifacts?", "Consequently, CSS must be crucially interested in the algorithmic fairness or (absence of) bias of the underlying machine learning methods (e.g., Binns, 2018; Canetti et al., 2019).", "However, work on this topic in NLP over the last years has found Angela Merkel called for swift tax cuts.", "that more applications contain biases than not, including lexical semantics (Bolukbasi et al., 2016), emotion detection (Kiritchenko and Mohammad, 2018), coreference (Zhao et al., 2018), recommendation generation (Chakraborty et al., 2019) and textual inference (Rudinger et al., 2017).", "It is therefore surprising that, to our knowledge, the bias of NLP methods applied in the CSS domain have found little attention so far.", "In this paper, we consider the CSS task of political claim analysis (Koopmans and Statham, 2010), an entity and relation extraction task from the domain of argument(ation) mining (Cabrio and Vil-lata, 2018).", "Its goal is to extract (Actor, Polarity, Claim) tuples from text, as illustrated in Figure 1.", "This is a structured prediction task with the goal of identifying actors, their claims, and polarities (sup-port/opposition).", "We investigate neural models for the claim identification aspect of political claims analysis trained on a German dataset, MARDY (Pado et al., 2019), and find that these models exhibit a strong frequency bias: claims made by frequently occurring actors are retrieved with higher recall than claims by infrequently mentioned actors.", "This is worrying, because it means that actors who repeat their claims often will now receive 'pref-erential treatment' in the aggregated analysis and, arguably, be perceived as even more prominent than they are (Hovy and Spruit, 2016).", "We interpret these patterns as overfitting of the claim detection model: it relies too much on actor mentions (i.e., either proper names or pronouns) as indicators of claims.", "To debias the model, we propose three methods: (1) mask the 
actor information by anonymizing referential expressions in the texts; (2) train claim detectors adversarially by actor frequency; (3) assign more weight to low-frequency training examples in the loss function.", "We find that actor masking leads to almost no loss in performance but greatly reduces the frequency bias, at the same time improving out-of-domain generalization.", "Task.", "For political science, the analysis of political debate provides a window into the process of decision making that is crucial for democracy (Leifeld, 2016).", "An influential framework in this area is political claims analysis (Koopmans and Statham, 2010), which is interested in the association between political actors and their claims (cf. Figure 1), where claims are statements about specific future actions that the actor endorses or rejects.", "Such actor-claim pairs can be aggregated into discourse networks and analyzed for aspects such as discourse coalitions or developments over time (Haunss, 2017; Wang and Wang, 2017).", "From an NLP perspective, full political claims analysis is a relatively complex process (Padó et al., 2019) that involves recognizing entities (actors), opinions (claims), and the relations between them (actor-claim pairs).", "In this paper, we focus on the task of claims detection in a narrow sense, namely the identification of claim spans in running text (cf. the right-hand markable in Figure 1), a task that is structurally related to (shallow) argument mining (Swanson et al., 2015; Vilares and He, 2017).", "Dataset.", "We use the MARDY dataset, a corpus of articles relevant to the German immigration debate of the year 2015 drawn from the major German newspaper Die Tageszeitung (taz) (Padó et al., 2019).", "The corpus consists of 959 articles with a total of 1841 claims with an average length of 20 tokens.", "Each claim is associated with an actor.", "For about half of the claims (879), the actor is local (i.e., inside the claim); for the rest, it is non-local (i.e., somewhere in the document context).", "Model.", "We investigate a model inspired by the best claims detection model from Padó et al. (2019).", "Our claims detector is also a transformer based on BERT (Devlin et al., 2019) with a default pretraining", "objective.", "However, we make two changes:", "(a) instead of framing the task as token sequence labeling, we perform sentence-level classification by placing a Softmax classifier on top of BERT, using the final hidden state of the special [CLS] token as the sentence meaning representation;", "(b) instead of using the Multilingual BERT model, which is known to have problems with finding sensible subword units for German, we use a BERT model trained solely on German corpora 1 .", "On the standard training/test split of the MARDY dataset, Padó et al.
(2019) report a macro-average F1 score of 65.5 (P=64.8, R=66.2).", "Using the same token-level evaluation, our model achieves a moderately improved F1 score of 67.6 (P=64.1, R=71.3), with a similar precision and a 5-point increase in recall.", "We carry out an analysis of the predictions of our claim detector on the MARDY dataset with 10-fold CV to maximize the amount of data under consideration.", "We group the actors into three frequency bands using the gold standard actor annotation, as shown in Table 1.", "Almost half of the actors occur only once, indicating that actors follow a Zipfian distribution, as is typical for language data.", "We now evaluate the performance of our model per actor frequency band.", "Since actor prediction is not part of the model, we only analyze recall at the claim (not token) level.", "We also restrict ourselves to the 879 claims with local actors, assuming that local actors influence claim detection.", "Indeed, as Table 1 shows, the prediction quality differs substantially across actor frequency bands: in particular, claims made by hapax legomena actors (i.e., single-occurrence actors) show a worse recall (74.5%) than frequent actors (77-78%).", "1 https://deepset.ai/german-bert", "Evidently, the model picks up information about actors that helps in its task.", "Its sensitivity to actor frequency indicates that the presence of a previously seen actor name is a strong indicator for the presence of a claim.", "We nevertheless believe that this is an undesirable situation, since it means that the model extracts a systematically biased set of claims from the corpus: claims made by frequently mentioned actors (such as office holders or spokespersons) are reinforced, while claims made by infrequently mentioned actors are disregarded.", "This type of bias can lead to 'echo chambers' (Del Vicario et al., 2016) and confers overly high visibility onto frequent actors (Hovy and Spruit, 2016).", "To avoid exactly this type of bias, discourse analysis in social science generally factors out the 'newsworthiness' of claims by disregarding their number of mentions.", "We computationally debias our claims classifier.", "Computational debiasing methods generally either modify the model objectives (e.g., Bolukbasi et al., 2016) or the input data (e.g., Zhao et al., 2018).", "We experiment with both approaches.", "Actor Masking.", "Actor Masking is a data modification method where we mask all referential expressions referring to political actors by replacing them with placeholders.", "We consider two variants: MASKNAME This model masks the most frequent realization option of political actors, namely proper names of persons.", "We operationalize 'person name' as all phrases marked as PER by the SpaCy German Named Entity Recognizer (F-Score 83.0 on WikiNER).", "MASKNAMEPRON This model masks person names as above.", "In addition, it masks all personal pronouns in MARDY, which can also provide actor information, even though in a more indirect and thus less informative way.", "It uses the same placeholder.", "These masking procedures make it impossible for the claim detector to use information about the actor identity.", "The motivation is similar to using denoising autoencoders for text representation, which introduce perturbations in the input to encourage models to discover stable latent rather than surface text properties (Glorot et al., 2011).", "Adversarial Training. We use adversarial training to have the model learn representations of the input that do not exhibit biases (in our case, frequency biases) in any substantial way, following McHardy et al.
(2019).", "Concretely, we train our model simultaneously to predict whether the given text contains any claim and to prevent the adversarial component from predicting how frequently the claim actor occurs (Figure 2): The adversarial and main components share the feature extractor whose parameters ( f ) are therefore updated by the gradients coming through the objective functions of both model parts.", "Formally, let J c and J fr be the cross-entropy loss functions of the main (claim detector) and adversarial (frequency detector) components, let be the meta-parameter for the trade-off between the two losses.", "3 , and let be the learning rate.", "Then the updates are defined as: c := c J c c and fr := fr J fr fr (1) f := f (cid:18) J c f J fr f (cid:19) (2) Eq.", "(2) causes the feature extractor to receive the opposite gradients from the two model components, maximizing the loss of the frequency detector.", "Sample Weighting.", "Sample weighting aims to mitigate frequency bias by punishing model more for false negative predictions on claims by infrequent actors.", "Each training example is assigned to a weight which reflects the importance of the instance when computing the loss function.", "Concretely, we introduce three weights ( low , mid , high ) for the three actor frequency bands from Table 1.", "4 Parameter updates (i.e., back-propagation) 3 Following hyper-parameter search, we set to 1.0.", "4 Following hyperparameter search, we set low = 0 .", "5 , mid = 0 .", "3 and high = 0 .", "2 , and assign = 0 .", "1 to negative instances (i.e. non-claims).", "We first investigate the effect of frequency debasing in a standard in-domain setting, re-using the setup from Section 3 (10-fold cross-validation, claim-level evaluation) to train one standard and four debiased models.", "Table 2 shows results on all claims.", "5 We find that the two actor masking models show a slight increase in recall (around 1 point), accompanied by a similar drop in precision.", "Thus, the F2-Scores of the three models are more or less on par (the differences are not statistically significant): the debiased models perform as well as STANDARD despite the loss of information in the dataset.", "The two ML-focussed debiasing methods have a completely different impact on the claim detector: Both ADVERSARIAL and SAMPLEWEIGHTING improve the precision significantly, but suffer a decrease in recall.", "Thus, the data modification methods, in particular MASKNAMEPRON , appear competitive.", "Next, we repeat the analysis by frequency band on the set of local claims from Section 3 for all 5 The difference between the F-score reported here for STANDARD and the one from Section 2 is the difference between token-level F1-Score and claim-level F2-Score evaluation.", "We believe that claim-level evaluation provides a more meaningful evaluation of claim identification but have reported token-level evaluation above for comparison to previous work.", "Regarding the precise metric, weighting recall higher then precision provides a better match for a semi-automatic setup with manual post-correction (Haunss et al., 2020), which is arguably necessary at the present level of performance.", "five models.", "Table 3 shows the recall values.", "We find that actor masking leads to a slight decrease in recall (under 1 point) for actors from the High band: we believe that this is unproblematic, given the redundancy of newspaper reporting.", "At the same time, brings about substantial improvements in recall for both the Low (+7 points) and the 
Mid (+5 points) actor frequency bands, so claims advanced by infrequent actors have a substantially better chance of being recognized by the system.", "As for the representation-based methods, adversarial training does also, to some extent, lead to a fairer claim detector: It mitigates the differences across the low and high bands; however, it also leads to a significant decrease in overall recall.", "SAMPLEWEIGHTING is the least effective debiasing method, performing rather badly on the low frequency band.", "Regarding a more qualitative understanding of the actor masking methods, consider the following claim, which was recognized by both debiased models but not STANDARD: Der Dresdner Superintendent Christian Behr ruft zu Nächstenliebe und Dialogbereitschaft auf. ('The Dresden superintendent Christian Behr calls for charity and willingness to engage in dialogue.')", "Comparing the two actor masking methods, the improvements in MASKNAMEPRON surpass those of MASKNAME, which indicates that a more consistent treatment of referring expressions by replacing both proper names and pronouns is advantageous, maybe due to the fact that there is often a relatively free choice between pronouns and proper names.", "We now carry out a second experiment following the intuition that models relying on less specific features generalize better to out-of-domain data, which was also the original motivation for denoising autoencoders (Glorot et al., 2011).", "As out-of-domain dataset, we used the AKW (Haunss et al., 2013) corpus.", "Table 4: Exp. 2 (cross-domain): Results for all claims. Model | Precision | Recall | F2-Score: STANDARD | 19.8 | 40.4 | 33.4; MASKNAME | 21.3 | 43.2 | 35.8; MASKNAMEPRON | 20.5 | 42.2 | 34.8; ADVERSARIAL | 26.0 | 33.0 | 31.3; SAMPLEWEIGHTING | 22.8 | 40.0 | 34.8. This is another German corpus for the task of political claims identification, which covers the debate on the future of nuclear energy use in Germany in the four months after the nuclear", "disaster of Fukushima, Japan in March 2011.", "The dataset contains 828 articles and 934 claims, all associated with one of 348 unique actors.", "We re-use the frequency bands computed for MARDY, under the assumption that it is the frequency distribution in the training data that matters for performance.", "AKW differs from the MARDY corpus in the subject of the debate, the time span, and the newspapers (Die Welt and Süddeutsche Zeitung).", "We used AKW solely as a test set for models trained on MARDY.", "Table 4 shows the main results.", "The significant decrease in F-scores compared to Table 2 shows that current claim detection is substantially domain-specific.", "Nevertheless, MASKNAME (+2 points F-score), MASKNAMEPRON (+1 point F-score) and SAMPLEWEIGHTING (+1 point F-score) all generalize somewhat better than STANDARD.", "MASKNAMEPRON and MASKNAME also beat STANDARD in both precision and recall.", "ADVERSARIAL, on the other hand, shows a 2.0-point decrease in F2-score as a result of the overall decrease in recall compared to STANDARD.", "Table 5 shows recall values for claims by actor frequency band.", "As in Exp. 1, this analysis is restricted to claims with locally realized actors.", "6 We only consider actors that occur in MARDY, assuming that it is the frequency in the training set that matters. We observe a similar pattern to Exp. 1 (cf. Table 3) for the actor masking models: (1) The STANDARD model suffers from frequency bias in the form of worst performance on the Low band (-7 points compared
to High); (2) both actor masking models improve performance for the Low band, thus decreasing frequency bias.", "The two representation-based models, on the other hand, show an overall low recall with no decrease in frequency bias, and particularly bad results on the Low band for ADVERSARIAL.", "This paper has discussed the task of political claims analysis as an example of Computational Social Science where NLP methods are finding adoption to scale analysis to large data sets.", "We have argued that work in this scenario must be aware of systematic biases in the output of the NLP methods.", "The NLP community has mostly focused on biases grounded in extralinguistic reality, e.g., gender (Bolukbasi et al., 2016; Rudinger et al., 2018; Stanovsky et al., 2019), race (Kiritchenko and Mohammad, 2018), or age (Hovy and Søgaard, 2015).", "We identified frequency as a language-internal bias present in a current neural model in political claims analysis.", "It warrants the same kind of attention as other bias types: lower recall for infrequent actors is inherently unfair, hitting those who can least afford to have their contribution overlooked.", "We compared two approaches to mitigating frequency bias in political claims detection and tested them in in-domain and out-of-domain settings.", "We found that a simple data modification strategy performs as well as or better than modifying the model objective.", "Actor masking improves recall for infrequent actors without affecting overall performance, and, as a side benefit, also improves out-of-domain generalization.", "While we only evaluated the strategy on one model, we believe its benefits carry over to other model architectures and similar tasks.", "Clearly, actor frequency is only one of a large number of potential frequency-related biases.", "Since frequency is known to be strongly correlated with performance in machine learning-based NLP, such biases should be investigated more systematically in areas building on NLP such as Computational Social Sciences.", "To remove these biases, however, presumably more sophisticated methods will be necessary in the general case.", "Funding was provided by Deutsche Forschungsgemeinschaft (DFG) through project MARDY in SPP RATIO.", "We would like to thank G. Lapesa, N. Blokker and S. Haunss for valuable comments." ]
[ "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "objective", "abstain", "method", "abstain", "abstain", "abstain", "other", "other" ]
[ "Neural sequence-to-sequence models have been successfully applied to text compression.", "However, these models were trained on huge automatically induced parallel corpora, which are only available for a few domains and tasks.", "In this paper, we propose a novel interactive setup to neural text compression that enables transferring a model to new domains and compression tasks with minimal human supervision.", "This is achieved by employing active learning, which intelligently samples from a large pool of unlabeled data.", "Using this setup, we can successfully adapt a model trained on small data of 40k samples for a headline generation task to a general text compression dataset at an acceptable compression quality with just 500 sampled instances annotated by a human.", "Text compression is the task of condensing one or multiple sentences into a shorter text of a given length preserving the most important information.", "In natural language generation applications, such as summarization, text compression is a major step to condense the extracted important content of the source documents.", "But text compression can also be applied in a wide range of related applications, including the generation of headlines (Filippova et al., 2015), captions (Wubben et al., 2016), subtitles (Vandegehinste and Pan, 2004; Luotolahti and Ginter, 2015), and the compression of text for small screens (Corston-Oliver, 2001).", "Neural sequence-to-sequence (Seq2Seq) models have shown remarkable success in many areas of natural language processing and specifically in natural language generation tasks, including text compression (Rush et al., 2015; Filippova et al., 2015; Yu et al., 2018; Kamigaito et al., 2018).", "Despite their success, Seq2Seq models have a major drawback, as they require huge parallel corpora with pairs of source and compressed text to be able to learn the parameters for the model.", "So far, the size of the training data has been proportional to the increase in the model's performance (Koehn et al., 2003; Suresh, 2010), which is a major hurdle if only limited annotation capacities are available to manually produce a corpus.", "That is why existing research employs large-scale automatically extracted compression pairs, such as the first sentence and the presumably shorter headline of a news article.", "However, such easy-to-extract source data is only available for a few tasks, domains, and genres and the corresponding models do not generalize well from the task of headline generation to other text compression tasks.", "In this paper, we propose an interactive setup to neural text compression, which learns to compress based on user feedback acquired during training time.", "For the first time, we apply active learning (AL) methods to neural text compression, which greatly reduces the amount of the required training data and thus yields a much more data-efficient training and annotation workflow.", "In our experiments, we find that this approach enables the successful transfer of a model trained on headline generation data to a general text compression task with a minimum of parallel training instances.", "The objective of AL is to efficiently select unlabeled instances that a user should annotate to advance the training.", "A key component of AL is the choice of the sampling strategy , which curates the samples in order to maximize the model's performance with a minimum amount of user interaction.", "Many AL sampling strategies have proven effective for human-supervised natural language 
processing tasks other than compression (Hahn et al., 2012; Peris and Casacuberta, 2018; Liu et al., 2018).", "In our work, we exploit the application of uncertainty-based sampling using attention dispersion and structural similarity for choosing samples to be annotated for our interactive Seq2Seq text compression model.", "We employ the AL strategies for", "(a) learning a model with a minimum of data, and", "(b) adapting a pretrained model with few user inputs to a new domain.", "In the remainder of the paper, we first discuss related work and introduce the state-of-the-art Seq2Seq architecture for the neural text compression task.", "Then, we propose our novel interactive compression approach and demonstrate how batch-mode AL can be integrated with neural Seq2Seq models for text compression.", "In section 4, we introduce our experimental setup, and in section 5, we evaluate our AL strategies and show that our approach successfully enables", "(a) learning the Seq2Seq model with a minimum of data,", "(b) transfer of a pretrained headline generation model to a new compression task and dataset with minimal user interaction.", "To encourage further research and enable reproducing our results, we publish our code as open-source software.", "1 https://github.com/UKPLab/NAACL2019-interactiveCompression 2 Related Work In this section, we discuss related work concerning: (1) neural text compression models, (2) existing text compression corpora and (3) active learning for neural models.", "Neural text compression.", "Neural text compression models can be broadly classified into two categories:", "(a) deletion-based extractive models and", "(b) abstractive models.", "The goal of the deletion-based models is to delete unimportant words from a source text to generate a shorter version of the text.", "In contrast, abstractive models generate a shorter text by inserting, reordering, reformulating, or deleting words of the source text.", "Previously, deletion-based extractive methods explored various modeling approaches, including the noisy-channel model (Knight and Marcu, 2002; Turner and Charniak, 2005), integer linear programming (Clarke and Lapata, 2007), variational autoencoders (Miao and Blunsom, 2016), and Seq2Seq models (Filippova et al., 2015).", "Similarly, recent abstractive models have seen tree-to-tree transduction models (Cohn and Lapata, 2013) and variations of Seq2Seq models, such as attention (Rush et al., 2015), attentive long short-term memory (LSTM) models (Wubben et al., 2016), and operation networks, where the Seq2Seq model decoder is replaced with a deletion decoder and a copy-generate decoder (Yu et al., 2018).", "Filippova et al. (2015) show that Seq2Seq models without any linguistic features have the ability to delete unimportant information.", "Kamigaito et al. (2018) incorporate higher-order dependency features into a Seq2Seq model and report promising results.", "Rush et al. (2015) propose an attention-based Seq2Seq model for generating headlines.", "Chopra et al.
(2016) further improve on this task with recurrent neural networks.", "Although Seq2Seq models show state-of-the-art results on different compression datasets, no work has yet investigated whether large training corpora are needed to train neural compression models, or whether there are efficient ways to train and adapt them to other datasets with few annotations.", "Text compression corpora.", "Early publicly available text compression datasets are manually curated but small (Cohn and Lapata, 2008; Clarke and Lapata, 2006, 2008).", "These datasets are typically used by unsupervised approaches, as they are roughly 200 times smaller than the annotated data used for training state-of-the-art supervised approaches.", "Filippova and Altun (2013) introduce an extractive compression dataset of 250k headline and first-sentence compression pairs based on Google News, which they use for training a supervised compression method.", "Similarly, Rush et al. (2015) create another large abstractive dataset of 4 million headline and first-sentence compression pairs from news articles extracted from the Annotated Gigaword corpus (Napoles et al., 2012).", "Although these datasets are large, they predominantly address headline generation for news.", "Creating such large corpora manually for a new task or domain is hard.", "Toutanova et al. (2016) pioneered the manual creation of a multi-reference compression dataset, MSR-OANC, with 6k sentence/short-paragraph pairs from business letters, newswire, journals, and technical documents sampled from the Open American National Corpus (https://www.anc.org/data/oanc).", "They provide five crowd-sourced rewrites for a fixed compression ratio and also acquire quality judgments.", "This dataset covers multiple genres, in contrast to the large automatically collected compression datasets, and Toutanova et al. (2016) show that neural Seq2Seq models struggle to achieve good performance on it.", "[Figure 1: Pipeline of our interactive text compression model.]", "In our work, we go beyond that and investigate strategies to easily adapt pretrained models to such small datasets employing minimal user input.", "Active learning for neural models.", "AL has been successfully applied to various natural language processing tasks, including corpus annotation (Hahn et al., 2012; Yan et al., 2011), domain adaptation (Chan and Ng, 2007), personalized summarization (P. V. S. and Meyer, 2017), machine translation (Haffari and Sarkar, 2009), language generation (Mairesse et al., 2010), and many more.", "Only recently has it been applied to neural models: Wang et al. (2017a) propose an AL approach for a black-box semantic role labelling (SRL) model, where the AL framework is an add-on to the neural SRL models.", "Peris and Casacuberta (2018) use AL in neural machine translation.", "They propose quality estimation sampling, coverage sampling, and attention distraction sampling strategies to query data for interactive machine translation.", "Liu et al. 
(2018) additionally propose an AL simulation in which a model trained on a high-resource language pair is transferred to low-resource language pairs.", "In another line of research, Sener and Savarese (2018) discuss a core-set AL approach as a batch sampling method for neural image classification based on convolutional neural networks.", "Although AL techniques have been widely used in natural language processing, to our knowledge, there is yet no work on the use of AL for neural text compression.", "We fill this gap by putting the human in the loop to learn effectively from a minimal amount of interactive feedback, and for the first time, we explore this data-efficient AL-based approach to adapt a model to a new compression dataset.", "To address this research problem, we first describe the neural Seq2Seq text compression models we use.", "Then, we introduce our active learning strategies to select the training samples interactively for in-domain training as well as for domain adaptation, and we describe a novel interactive neural text compression setup.", "Figure 1 illustrates the main components of our system.", "In this work, we employ state-of-the-art Seq2Seq models with attention (Seq2Seq-gen) (Rush et al., 2015) and pointer-generator networks with coverage (Pointer-gen) (See et al., 2017) as our base models, which we use for our AL-based interactive text compression setup.", "Both Seq2Seq models are built upon the encoder-decoder framework by Sutskever et al. (2014).", "The encoder encodes the input sequence $x = (x_1, x_2, \ldots, x_n)$, represented by an embedding matrix, into a continuous space using a bidirectional LSTM network and outputs a sequence of hidden states.", "The decoder is a conditional bidirectional LSTM network with attention distribution (Luong et al., 2015) $a_{ji} = \frac{\exp(e_{ji})}{\sum_{k=1}^{n} \exp(e_{jk})}$ (1), where $e_{ji}$ is computed at each generation step $j$ from the encoder states $h_i^{enc}$ and the decoder states $h_j^{dec}$: $e_{ji} = q^{\top} \tanh(W_h^{enc} h_i^{enc} + W_h^{dec} h_j^{dec} + b_{att})$ (2), where $q$, $W_h^{enc}$, $W_h^{dec}$ and $b_{att}$ are learnable parameters.", "The attention distribution $a_{ji}$ is used to compute the weighted sum of the encoder hidden states, also known as the context vector: $c_j = \sum_{i=1}^{n} a_{ji} h_i^{enc}$ (3).", "To obtain the vocabulary distribution $P_j^{vocab}$ at generation step $j$, we concatenate the context vector with the decoder state $h_j^{dec}$ and pass it through two linear layers: $P_j^{vocab} = \mathrm{softmax}(W_v (W'_v [h_j^{dec}; c_j] + b'_v) + b_v)$ (4), where $W_v$, $W'_v$, $b_v$ and $b'_v$ are learnable parameters.", "$P_j^{vocab}$ is a probability distribution over all words in the vocabulary $V$.", "Based on the vocabulary distribution, the model generates the target sequence $y = y_1, y_2, \ldots, y_m$, $m \le n$, with $y_j = \operatorname{argmax}_{w \in V} P_j^{vocab}(w)$ (5) for each generation step $j$.", "Finally, during training, we define the loss for generation step $j$ as the negative log-likelihood of the target word $y_j$, and the overall loss for the target word sequence as $L = -\frac{1}{m} \sum_{j=0}^{m} \log P_j^{vocab}(y_j)$ (6).", "Another state-of-the-art approach we use for our experiments is the pointer-generator network (Pointer-gen) proposed by See et al. (2017).",
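To make Eqs. 1-4 concrete, here is a minimal, self-contained PyTorch sketch of a single attentive decoding step. It is illustrative only, not the authors' OpenNMT-based implementation; the module and parameter names (AttentionDecoderStep, W_enc, W_dec, q) merely mirror the notation above, and the toy dimensions are arbitrary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDecoderStep(nn.Module):
    """Illustrative single decoding step implementing Eqs. 1-4."""
    def __init__(self, hidden, vocab_size):
        super().__init__()
        self.W_enc = nn.Linear(hidden, hidden, bias=False)  # W_h^enc
        self.W_dec = nn.Linear(hidden, hidden, bias=True)   # W_h^dec (bias plays the role of b_att)
        self.q = nn.Linear(hidden, 1, bias=False)           # scoring vector q
        self.out1 = nn.Linear(2 * hidden, hidden)           # W'_v, b'_v
        self.out2 = nn.Linear(hidden, vocab_size)           # W_v, b_v

    def forward(self, h_enc, h_dec_j):
        # e_ji = q^T tanh(W_enc h_i^enc + W_dec h_j^dec + b_att)   (Eq. 2)
        scores = self.q(torch.tanh(self.W_enc(h_enc) + self.W_dec(h_dec_j).unsqueeze(1))).squeeze(-1)
        a_j = F.softmax(scores, dim=-1)                     # attention distribution (Eq. 1)
        c_j = torch.bmm(a_j.unsqueeze(1), h_enc).squeeze(1) # context vector (Eq. 3)
        # vocabulary distribution from [h_dec_j; c_j] through two linear layers (Eq. 4)
        p_vocab = F.softmax(self.out2(self.out1(torch.cat([h_dec_j, c_j], dim=-1))), dim=-1)
        return p_vocab, a_j, c_j

# toy usage: batch of 2, source length 7, hidden size 16, vocabulary of 100 words
step = AttentionDecoderStep(hidden=16, vocab_size=100)
p_vocab, a_j, c_j = step(torch.randn(2, 7, 16), torch.randn(2, 16))
print(p_vocab.shape, a_j.sum(dim=-1))  # torch.Size([2, 100]), rows of a_j sum to 1
```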
(2017).", "This model uses a pointer-generator network that determines a probability function to generate the words from the vocabulary V or copy the words from the source text by sampling from the attention distribution a ji as shown in Eq.", "8.", "The model achieves this by calculating an additional generation probability p gen for generation step j , which is calculated from the context vector c j , the decoder state h dec j , and the current input to the decoder x (cid:48) j : p gen = ( W Tc c j + W Th dec h dec j + W Tx (cid:48) x (cid:48) j + b gen ) (7) P j ( w ) = p gen P vocab j ( w ) + (1 p gen ) n (cid:88) i =0 a ji (8) where vectors W c , W h dec , W x (cid:48) , b gen are learnable parameters, n is the number of words in the source text and is the sigmoid function.", "The model also uses an extra feature of coverage to keep track of words generated by the model to discourage repetition.", "In the coverage model, a coverage vector is calculated which is the sum of the attention distribution across all the previous decoding steps and it is passed on as an extra input to the attention mechanism: c ji = j 1 (cid:88) k =0 a ki (9) e ji = q tanh( W enc h h enc i + W dec h h dec j + W c c ji + b att ) (10) where W c is an additional learnable parameter.", "Toutanova et al. (2016) show that Seq2Seq models, which perform well on large news headline generation datasets, fail to achieve good performance on their MSR-OANC multi-genre compression dataset.", "A major issue with training Seq2Seq models is the lack of domain-specific data and the expensive process to create parallel compression pairs.", "It is therefore indispensable to minimize the cost of data annotation.", "Thus, AL comes into play whose key element is to find a strategy for selecting samples the user should annotate which yield a more efficient training process.", "For text compression, we suggest AL strategies to maximize the model's coverage and the diversity of the samples.", "To this end, we build upon work in uncertainty sampling by (Peris and Casacuberta, 2018; Wang et al., 2017b) and propose a new strategy to predict the sample diversity at a structural level.", "Coverage constraint sampling (Coverage-AL).", "An important factor on which text compression models are evaluated is the coverage (Marsi et al., 2010).", "Coverage can be defined as the text compression models being able to learn the deletion or generation rules from the training samples and apply them on an input source text.", "Wu et al. 
"Wu et al. (2016) first proposed the idea of using attention weights to calculate a coverage penalty for active-learning-based machine translation systems.", "This idea was extended by Peris and Casacuberta (2018) to estimate an attention-dispersion-based uncertainty score for a sentence.", "The idea of attention dispersion is that if the neural Seq2Seq compression model is uncertain, then the attention weights will be dispersed across the source text while generating the target words.", "The samples with higher dispersion will have their attention weights uniformly distributed across the source sentences.", "Thus, the goal is to find the samples with high uncertainty based on attention dispersion.", "As we want to quantify the extent to which the attention distribution differs from a normal distribution, we propose to use a skewness score.", "The skewness score measures the attention dispersion while decoding a target word $y_j$.", "$a_{ji}$ is the attention weight assigned by the attention layer to the $i$-th source word when decoding the $j$-th target word, and $\frac{1}{n}$ is the mean of the attention weights for the target word $y_j$.", "The skewness of a normal distribution is zero, and since we are interested in the skewness of samples with heavy tails, we take the negative of the skewness averaged across all target words to obtain the uncertainty coverage score $C_{score}$.", "Diversity constraint sampling (Diversity-AL).", "Diversity sampling methods have been used in information retrieval (Xu et al., 2007) and image classification (Wang et al., 2017b).", "The core idea is that samples that are highly similar to each other typically yield little new information and thus contribute little to performance.", "Similarly, to increase the diversity of the samples in neural text compression, we propose a novel scoring metric that measures the diversity of multiple source texts at a structural level.", "Our intuition is that integrating part-of-speech, dependency, and named entity information is useful for text compression, e.g., to learn which named entities are important and how to compress a wide range of phrase types and syntactically complex sentences.", "Thus, we consider part-of-speech tags, dependency trees, and named entity embeddings, and calculate the structural similarity of the source text with regard to the target text.", "We use a multi-task convolutional neural network similar to Søgaard and Goldberg (2016), trained on OntoNotes and Common Crawl, to learn the structural embeddings consisting of tag, dependency, and named entity embeddings.", "The diversity score $D_{score}$ is calculated as the cosine distance between the average of the structural embeddings of the words in the source sentence and the average of the structural embeddings of the words in the target compression, as in Eq. 13: $D_{score}(x, y) = \mathrm{cos\_dist}(E_{struc}(x), E_{struc}(y))$, where $E_{struc}(\cdot)$ is the average structural embedding of a text.", "These AL sampling strategies are applied interactively during training to make better use of the data by selecting the most uncertain instances.", "Additionally, both strategies can be applied for domain adaptation by actively querying user annotations for a domain-specific dataset in an interactive text compression setup, which we describe next.", "In this subsection, we introduce our interactive text compression setup.", "Our goal is to select batches of training samples efficiently from minimal data and to be able to transfer the models to new datasets for different domains and genres with few labeled examples.",
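The following NumPy sketch illustrates both sampling scores. Since the exact skewness formula (the original Eqs. 11-12) is not reproduced in this copy, coverage_score assumes the standard third standardized moment over each target word's attention weights; diversity_score follows Eq. 13, with the structural embeddings stubbed out as random vectors in place of the spaCy-derived ones.

```python
import numpy as np

def coverage_score(attn):
    """Uncertainty coverage score C_score: negative skewness of the attention
    weights, averaged over target words. attn: (m_target, n_source) matrix whose
    rows sum to 1. The skewness formula here is an assumption (standard third
    standardized moment); higher C_score = more dispersed = more uncertain."""
    mean = 1.0 / attn.shape[1]                   # mean attention weight per row
    centered = attn - mean
    skew = (centered ** 3).mean(axis=1) / ((centered ** 2).mean(axis=1) ** 1.5 + 1e-12)
    return -skew.mean()

def diversity_score(src_vecs, tgt_vecs):
    """D_score (Eq. 13): cosine distance between the average structural
    embeddings of the source text and of the target compression."""
    e_src, e_tgt = src_vecs.mean(axis=0), tgt_vecs.mean(axis=0)
    cos = e_src @ e_tgt / (np.linalg.norm(e_src) * np.linalg.norm(e_tgt) + 1e-12)
    return 1.0 - cos

# toy usage: 5 target words attending over 8 source words; 32-dim stub embeddings
rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(8), size=5)
print(coverage_score(attn))
print(diversity_score(rng.normal(size=(8, 32)), rng.normal(size=(4, 32))))
```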
"We consider an initial collection of parallel instances $D = \{(x_i, y_i) \mid 1 \le i \le N\}$ consisting of pairs of an input text $x_i$ and its corresponding compression $y_i$.", "Additionally, we consider unlabeled instances $D' = \{x_i \mid i > N\}$, for which we only know the uncompressed source texts.", "Our goal is to sample sets of unlabeled instances $S_t \subseteq D'$ which should be annotated by a user in each time step $t$.", "The interactive compression model can only see the labeled pairs from the initial dataset $D$ in the beginning, but then incrementally learns from the user annotations.", "Algorithm 1 provides an overview of our interactive compression setup.", "The inputs are the labeled compression pairs $D$ and the unlabeled source texts $D'$.", "$D$ is used to initially train the neural text compression model $M$.", "In line 5, we start the interactive feedback loop, iterating over $t = 0, \ldots, T$.", "We first sample a set of unlabeled source texts $S_t$ (line 6) using our AL strategies introduced in section 3.2, and then loop over each of the unlabeled samples to be annotated or supervised by the human in line 10.", "As the user feedback in the current time step, we obtain the compressions $Y_t$ of the sampled source texts $S_t$ from the user and use them for online training of the model $M$.", "After $T$ iterations, or if there are no samples left for querying (i.e., $S_t = \emptyset$), we stop the iteration and return the updated Seq2Seq model $M$.",
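A compact sketch of Algorithm 1 as described above. The names train_on, update_online, sample_fn and get_user_compressions are hypothetical placeholders for the model's training routine, the AL strategy (Coverage-AL or Diversity-AL) and the (possibly simulated) annotator; the toy stubs at the bottom only show the control flow.

```python
def interactive_compression(model, D_labeled, D_unlabeled, sample_fn,
                            get_user_compressions, batch_size=50, T=10):
    """Sketch of Algorithm 1: train on the labeled seed data, then repeatedly
    sample unlabeled texts with an AL strategy, query the user for
    compressions, and update the model online."""
    model.train_on(D_labeled)                       # initial training on D
    for t in range(T):
        S_t = sample_fn(model, D_unlabeled, batch_size)   # AL-ranked batch
        if not S_t:                                 # S_t empty: nothing left to query
            break
        Y_t = get_user_compressions(S_t)            # human (or simulated) feedback
        model.update_online(list(zip(S_t, Y_t)))    # incremental training step
        D_unlabeled = [x for x in D_unlabeled if x not in set(S_t)]
    return model

# minimal runnable stubs to exercise the loop
class ToyModel:
    def train_on(self, pairs): pass
    def update_online(self, pairs): pass

def random_sampler(model, pool, k):  # stand-in for Coverage-AL / Diversity-AL
    return pool[:k]

trained = interactive_compression(ToyModel(), [("a long source sentence", "short")],
                                  ["text one", "text two"], random_sampler,
                                  lambda S: [s[:10] for s in S], batch_size=1, T=2)
```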
"For our experiments, we use the large Google News text compression corpus (https://github.com/google-research-datasets/sentence-compression) by Filippova and Altun (2013), which contains 250k automatically extracted deletion-based compressions from aligned headlines and first sentences of news articles.", "Recent studies on text compression have used this dataset extensively (e.g., Zhao et al., 2018; Kamigaito et al., 2018).", "We carry out in-domain active learning experiments on the Google News compression corpus.", "To evaluate our interactive setup, we adapt the trained models to the MSR-OANC text compression corpus by Toutanova et al. (2016), which contains 6k crowdsourced multi-genre compressions from the Open American National Corpus.", "This corpus is well-suited to evaluate our interactive setup, since it is sourced from a mixture of newswire, letters, journals, and non-fiction genres, in contrast to the Google News corpus, which covers only newswire.", "For evaluating the compressions against the reference compressions, we use a Python wrapper (https://github.com/pltrdy/files2rouge) of the ROUGE metric (Lin, 2004) with the parameters suggested by Owczarzak et al. (2012), yielding high correlation with human judgments (i.e., with stemming and without stopword removal; ROUGE options: -n 2 -c 95 -r 1000 -a -m).", "4.2 Preprocessing and Parameters To preprocess the datasets, we perform tokenization.", "We obtain the structural embeddings for a sentence using spaCy (https://spacy.io/) embeddings learned with a multi-task convolutional neural network.", "To evaluate and assess the effectiveness of our active-learning-based sampling approaches, we set up our interactive text compression approach for the two state-of-the-art Seq2Seq models consisting of a generative model (Seq2Seq-gen) and a generate-and-copy model (Pointer-gen), as described in Section 3.1.", "For the neural Seq2Seq text compression experiments, we set the beam size and batch size to 10 and 30, respectively.", "We use the Adam optimizer (Kingma and Ba, 2015) for the gradient-based optimization.", "Finally, the neural network parameters, such as weights and biases, are randomly initialized.", "In order to assess the effectiveness of AL for neural text compression, we extend the OpenNMT (https://github.com/OpenNMT/OpenNMT-py) implementations with our interactive framework following Algorithm 1.", "The sampling strategy selects instances to be annotated interactively by the user in batches.", "Next, the neural text compression model is incrementally updated with the selected samples.", "Due to the presence of a human in the loop, the setup typically demands real user feedback, but the cost of collecting sufficient data for the various settings of our models is prohibitive.", "Thus, in our experiments, the users were simulated by using the compression pairs from our corpus as the sentences annotated by the user.", "Our experiments address two main research questions for in-domain training and domain adaptation of neural text compression:", "Which active learning strategies are useful in text compression to select training samples such that higher performance can be achieved with a minimum of labeled instances?", "Which instances should be annotated interactively by the user such that the model adapts quickly to a new dataset?", "For the in-domain active learning experiments, we choose the Google News text compression training corpus and sample corpus sizes between 10% and 100% in steps of ten percentage points.", "As a baseline, we use a random sampling strategy to test the state-of-the-art Seq2Seq neural text compression models.", "Figure 2 suggests that our coverage-based sampling (Coverage-AL) and diversity-based sampling (Diversity-AL) strategies outperform the random sampling strategy across all training sizes.", "A key observation is that our sampling strategies are behind the upper bound by just 0.5% ROUGE-2 when only 20% of the training data is used.", "Table 2 reports the results of our sampling strategies when 20% of the data is used for training.", "Table 2: ROUGE-1 (R1), ROUGE-2 (R2) and ROUGE-L (RL) achieved by the state-of-the-art models using our sampling strategies, evaluated on the Google compression test set (R1/R2/RL). Seq2Seq-gen: UB 59.94/52.08/59.78; Random 61.60/50.03/61.37; Coverage-AL 62.89/51.38/62.56; Diversity-AL 62.54/50.19/62.13. Pointer-gen: UB 79.26/71.77/79.08; Random 71.61/61.15/71.28; Coverage-AL 78.11/70.50/77.89; Diversity-AL 77.45/70.30/77.38.", "All results are compared to the upper bound (UB), which receives 100% of the training data.", "Coverage-AL performs better than Diversity-AL for both the Seq2Seq-gen and Pointer-gen models.",
"However, they are still not effective in the Seq2Seq-gen model where random sampling performs on par with the active learning sampling approaches.", "We believe this is due to the Seq2Seq-gen model's inability to copy from the source text in the sampled set as a consequence of active learning in the batch setting.", "Whereas for Pointer-gen model, we observed that both Coverage-AL and Diversity-AL strategies of adding new samples for training had a greater impact when the model has not adapted.", "We attribute the effectiveness of the Coverage-AL strategy over Diversity-AL to the exploitation of the model uncertainty, as the Diversity-AL only uses the similarity based on the samples, but misses to integrate the model uncertainty.", "Table 3 presents an example sentence compression pair from the Google News dataset and the generated compressions of both neural Seq2Seq models when using one of the three sampling strategies.", "The example shows that detailed descriptions like the names of the ships JING GANGSHA and HENG SHUI are dropped by all models.", "In particular, the Seq2Seq-gen model has the problem of generating words not present in the original text (e.g., Source text: Two Chinese war ships , JING GANGSHA and HENG SHUI arrived at the port of Trincomalee on 13 th January 2014 on a good will visit .", "toddlers, Scottsbluff).", "In contrast, the Pointer-gen model's ability to copy from the original text restrains the model from generating irrelevant words.", "Although Diverysity-AL based models recognized the phrasal constructs crucial for the sentence meaning, Coverage-AL generated the closest compression to the reference.", "Active learning for domain adaptation.", "To test our interactive Seq2Seq model using active learning strategies for the domain adaptation scenario, we train the model on the Google News compression corpus and test it on the multi-genre MSR-OANC compression dataset.", "Additionally, for domain adaptation, the neural Seq2Seq model is updated incrementally using our interactive compression Algorithm 1.", "The sampling strategies select the instances to be interactively annotated by the user.", "As the cost of interactive experimentation with real users, we use simulated feedback from the labeled sentence compressions from the MSR-OANC training data.", "The two sampling strategies used for in-domain active learning are used for interactive compression with the state-of-the-art Seq2Seq models.", "Table 4 illustrates the results of the interactive text compression model when applied to the MSR-OANC text compression dataset.", "One interesting observation is the fact that our sampling strategies at 10% of the training data ( 500 samples) perform better than models trained on in-domain training data (MSR-OANC ID) with 5k training instances by +8.3% and +8.2% ROUGE-2.", "Figure 3 shows the results for the various sample sizes of the 5k training instances.", "The results show a similar trend as the active learning for the interactive data-selection scenario.", "The Coverage-AL and Diversity-AL strategies do not show sig-nificant differences from each other.", "However, the two active learning strategies achieve on average +2.5% ROUGE-2 better results than the random sampling.", "The results demonstrate that the use of relevant training samples is useful for transferring the models to new domains and genres.", "Table 5 shows an example from the MSR-OANC compression dataset.", "The example illustrates similar compression properties as seen in the in-domain settings.", "In 
"In particular, the two models learned to drop appositions, optional modifiers, detailed clauses, etc.", "Additionally, we observed that the difficult cases were those where there is little to be removed; due to the higher compression ratios seen during training, the models removed more than required.", "[Table 5 source text: Given the urgency of the situation in Alaska, Defenders needs your immediate assistance to help save Alaska's wolves from same-day airborne land-and-shoot slaughter.]", "This confirms the cause for the lower ROUGE scores compared to the Google News corpus.", "We propose a novel neural text compression approach using a neural Seq2Seq method with an interactive setup that aims at", "(a) learning an in-domain model with a minimum of data and", "(b) adapting a pretrained model with few user inputs to a new domain or genre.", "In this paper, we investigate two uncertainty-based active learning strategies with", "(a) a coverage constraint using attention dispersion and", "(b) a diversity constraint using structural similarity to make better use of the user in the loop when preparing training data pairs.", "The active-learning-based data selection methodology samples the data such that the most uncertain samples are available for training first.", "Experimental results show that the selected samples achieve performance comparable to the state-of-the-art systems while using 80% less in-domain training data.", "Active learning with an interactive text compression model helps in transferring models trained on a large parallel corpus for a headline generation task to a general compression dataset with just 500 sampled instances.", "Additionally, the same in-domain active-learning-based data selection shows a notable performance improvement in an online interactive domain adaptation setup.", "Our experiments demonstrate that rather than more training data, relevant training data is essential for training Seq2Seq models, both for in-domain training and for domain adaptation.", "In future work, we plan to explore several lines of work.", "First, we intend to investigate further applications of our interactive setup, e.g., in movie subtitle compression or television closed captions, where there is not sufficient training data to build neural models.", "On a more general level, the interactive setup and the active learning strategies presented can also be used for other natural language processing tasks, such as question answering, to transfer a model to a new domain or genre.", "This work has been supported by the German Research Foundation as part of the Research Training Group Adaptive Preparation of Information from Heterogeneous Sources (AIPHES) under grant No. GRK 1994/1.", "We also acknowledge the useful suggestions of the anonymous reviewers." ]
[ "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "method", "method", "method", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "other", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other", "other", "other" ]
[ "Recent investigations into the inner-workings of state-of-the-art large-scale pre-trained Transformer-based Natural Language Understanding (NLU) models indicate that they appear to know humanlike syntax, at least to some extent.", "We provide novel evidence that complicates this claim: we find that state-of-the-art Natural Language Inference (NLI) models assign the same labels to permuted examples as they do to the original, i.e. they are largely invariant to random word-order permutations.", "This behavior notably differs from that of humans; we struggle with ungrammatical sentences.", "To measure the severity of this issue, we propose a suite of metrics and investigate which properties of particular permutations lead models to be word-order invariant.", "In the MNLI dataset, for example, we find almost all (98.7%) examples contain at least one permutation which elicits the gold label.", "Models are sometimes even able to assign gold labels to permutations that they originally failed to predict correctly.", "We provide a comprehensive empirical evaluation of this phenomenon, and further show that this issue exists for both Transformers and pre-Transformer RNN / ConvNet based encoders, as well as across multiple languages (English and Mandarin Chinese).", "Our code and data are available at https://github.com/facebookresearch/unlu.", "Of late, large scale pre-trained Transformer-based (Vaswani et al., 2017) modelssuch as RoBERTa (Liu et al., 2019), BART (Lewis et al., 2020), and GPT-2 and -3 (Radford et al., 2019; Brown et al., 2020)have exceeded recurrent neural networks' performance on many NLU tasks (Wang et al., 2018, 2019).", "Several papers have even suggested that Transformers pretrained on a language modeling (LM) objective can capture syntactic informa-Premise Hypothesis PredictedLabel Boats in daily use lie within feet of the fashionable bars and restaurants.", "tion (Hewitt and Manning, 2019; Jawahar et al., 2019; Warstadt and Bowman, 2020; Wu et al., 2020), with their self-attention layers being capable of surprisingly effective learning (Rogers et al., 2020).", "In this work, we question such claims that current models know syntax.", "Since there are many ways to investigate syn-tax, we must be clear on what we mean by the term.", "Knowing the syntax of a sentence means being sensitive to the order of the words in that sentence (among other things).", "Humans are sensitive to word order, so clearly, language is not merely a bag of words (Harris, 1954, p.156).", "Moreover, it is easier for us to identify or recall words presented in canonical orders than in disordered, ungrammatical sentences; this phenomenon is called the sentence superiority effect (Cattell 1886; Scheerer 1981; Toyota 2001; Baddeley et al. 2009; Snell and Grainger 2017, 2019; Wen et al. 2019, i.a.).", "In our estimation then, if one wants to claim that a model knows syntax, then they should minimally show that the model is sensitive to word order (at least for e.g. English or Mandarin Chinese).", "Generally, knowing the syntax of a sentence is taken to be a prerequisite for understanding what that sentence means (Heim and Kratzer, 1998).", "Models should have to know the syntax first then, if performing any particular NLU task that genuinely requires a humanlike understanding of meaning (cf. 
"Models should therefore have to know the syntax first when performing any particular NLU task that genuinely requires a humanlike understanding of meaning (cf. Bender and Koller 2020).", "Thus, if our models are as good at NLU as our current evaluation methods suggest, we should expect them to be sensitive to word order (see Table 1).", "We find, based on a suite of permutation metrics, that they are not.", "We focus here on textual entailment, one of the hallmark tasks used to measure how well models understand language (Condoravdi et al., 2003; Dagan et al., 2005).", "This task, often also called Natural Language Inference (NLI; Bowman et al. 2015, i.a.), typically consists of two sentences: a premise and a hypothesis.", "The objective is to predict whether the premise entails the hypothesis, contradicts it, or is neutral with respect to it.", "We find rampant word-order insensitivity in purportedly high-performing NLI models.", "For nearly all premise-hypothesis pairs, there are many permuted examples that fool the models into providing the correct prediction.", "In the case of MNLI, for example, the current state of the art of 90.5% can be increased to 98.7% merely by permuting the word order of test set examples.", "We even find drastically increased cross-dataset generalization when we reorder words.", "This is not just a matter of chance; we show that the model output probabilities are significantly different from uniform.", "We verify our findings with three popular English NLI datasets, SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018b) and ANLI (Nie et al., 2020), and one Chinese one, OCNLI (Hu et al., 2020a).", "It is thus less likely that our findings result from some quirk of English or a particular tokenization strategy.", "We also observe the effect for various transformer architectures pre-trained on language modeling (BERT, RoBERTa, DistilBERT), and non-transformers, including a ConvNet, an InferSent model, and a BiLSTM.", "Our contributions are as follows:", "(i) we propose a suite of metrics ( Permutation Acceptance ) for measuring model insensitivity to word order (§3),", "(ii) we construct multiple permuted test datasets for measuring NLI model performance at a large scale (§5),", "(iii) we show that NLI models focus on words more than word order, but can partially reconstruct syntactic information from words alone (§6),", "(iv) we show the problem persists on out-of-domain data,", "(v) we show that humans struggle with UnNatural Language Inference, underscoring the non-humanlikeness of SOTA models (§7),", "(vi) finally, we explore a simple maximum-entropy-based method (§8) to encourage models not to accept permuted examples.", "Researchers in NLP have realized the importance of syntactic structure in neural networks going back to Tabor (1994).", "An early hand-annotation effort on PASCAL RTE (Dagan et al., 2006) suggested that syntactic information alone was sufficient to make a judgment for roughly one third of examples (Vanderwende and Dolan, 2005).", "Anecdotally, large generative language models like GPT-2 or -3 exhibit a seemingly humanlike ability to generate fluent and grammatical text (Goldberg, 2019; Wolf, 2019).", "However, the jury is still out as to whether transformers genuinely acquire syntax.", "Models appear to have acquired syntax.", "When researchers have peeked inside Transformer LMs' pretrained representations, familiar syntactic structure (Hewitt and Manning, 2019; Jawahar et al., 2019; Lin et al., 2019; Warstadt and Bowman, 2020; Wu et al., 2020), or a familiar order of linguistic operations (Jawahar et al., 2019; Tenney et al., 2019), has appeared.",
"There is also evidence, notably from agreement attraction phenomena (Linzen et al., 2016), that transformer-based models pretrained on LM do acquire some knowledge of natural language syntax (Gulordava et al., 2018; Chrupała and Alishahi, 2019; Jawahar et al., 2019; Lin et al., 2019; Manning et al., 2020; Hawkins et al., 2020; Linzen and Baroni, 2021).", "Results from other phenomena (Warstadt and Bowman, 2020), such as NPI licensing (Warstadt et al., 2019a), lend additional support.", "The claim that LMs acquire some syntactic knowledge has been made not only for transformers, but also for convolutional neural nets (Bernardy and Lappin, 2017) and RNNs (Gulordava et al., 2018; van Schijndel and Linzen, 2018; Wilcox et al., 2018; Zhang and Bowman, 2018; Prasad et al., 2019; Ravfogel et al., 2019), although there are many caveats (e.g., Ravfogel et al. 2018; White et al. 2018; Davis and van Schijndel 2020; Chaves 2020; Da Costa and Chaves 2020; Kodner and Gupta 2020).", "Models appear to struggle with syntax.", "Several works have cast doubt on the extent to which NLI models in particular know syntax (although each work adopts a slightly different idea of what knowing syntax entails).", "For example, McCoy et al. (2019) argued that the knowledge acquired by models trained on NLI (for at least some popular datasets) is actually not as syntactically sophisticated as it might have initially seemed; some transformer models rely mainly on simpler, non-humanlike heuristics.", "In general, transformer LM performance has been found to be patchy and variable across linguistic phenomena (Dasgupta et al., 2018; Naik et al., 2018; An et al., 2019; Ravichander et al., 2019; Jeretic et al., 2020).", "This is especially true for syntactic phenomena (Marvin and Linzen, 2018; Hu et al., 2020b; Gauthier et al., 2020; McCoy et al., 2020; Warstadt et al., 2020), where transformers are, for some phenomena and settings, worse than RNNs (van Schijndel et al., 2019).", "From another angle, many have explored architectural approaches for increasing a network's sensitivity to syntactic structure (Chen et al., 2017; Li et al., 2020).", "Williams et al. (2018a) showed that learning jointly to perform NLI and to parse resulted in parse trees that match no popular syntactic formalisms.", "Furthermore, models trained explicitly to differentiate acceptable sentences from unacceptable ones (i.e., one of the most common syntactic tests used by linguists) have, to date, come nowhere near human performance (Warstadt et al., 2019b).", "Insensitivity to Perturbation.", "Most relatedly, several concurrent works (Pham et al., 2020; Alleman et al., 2021; Gupta et al., 2021; Sinha et al., 2021; Parthasarathi et al., 2021) investigated the effect of word order permutations on transformer NNs.", "Pham et al. (2020) is very nearly a proper subset of our work, except for investigating additional tasks (i.e., from the GLUE benchmark of Wang et al. 2018) and performing a by-layer analysis.",
"Gupta et al. (2021) also rely on the GLUE benchmark, but additionally investigate other types of destructive perturbations.", "Our contribution differs from these works in that we additionally include the following: we", "(i) outline theoretically informed predictions for how models should be expected to react to permuted input (we outline a few options),", "(ii) show that permuting can flip an incorrect prediction to a correct one,", "(iii) show that the problem isn't specific to Transformers,", "(iv) show that the problem persists on out-of-domain data,", "(v) offer a suite of flexible metrics, and", "(vi) analyze why models might be accepting permutations (BLEU and POS-tag neighborhood analysis).", "Finally, we replicate our findings in another language.", "While our work (and Pham et al.; Gupta et al.) only permutes data during fine-tuning and/or evaluation, recently Sinha et al. explored sensitivity during pre-training, and found that models trained on n-gram permuted sentences perform remarkably close to regular MLM pre-training.", "In the context of generation, Parthasarathi et al. (2021) crafted linguistically relevant perturbations (on the basis of part-of-speech tagging and dependency parsing) to evaluate whether permutation hinders automatic machine translation models.", "Relatedly, but not for translation, Alleman et al. (2021) investigated a smaller inventory of perturbations, with emphasis on phrasal boundaries and the effects of n-gram perturbations on different layers of the network.", "NLI models are very sensitive to words.", "NLI models often over-attend to particular words to predict the correct answer (Gururangan et al., 2018; Clark et al., 2019).", "Wallace et al. (2019) show that some short sequences of non-human-readable text can fool many NLU models, including NLI models trained on SNLI, into predicting a specific label.", "In fact, Ettinger (2020) observed that for one of three test sets, BERT loses some accuracy on word-perturbed sentences, but that there exists a subset of examples for which BERT's accuracy remains intact.", "If performance isn't affected (or if permutation helps, as we find it does in some cases), it suggests that these state-of-the-art models actually perform somewhat similarly to bag-of-words models (Blei et al., 2003; Mikolov et al., 2013).", "As we mentioned, linguists generally take syntactic structure to be necessary for humans to know what sentences mean.", "Many also find the NLI task to be a very promising approximation of human natural language understanding, in part because it is rooted in the tradition of logical entailment.", "In the spirit of propositional logic, sentence meaning is taken to be truth-conditional (Frege, 1948; Montague, 1970; Chierchia and McConnell-Ginet, 1990; Heim and Kratzer, 1998).", "That is to say that understanding a sentence is equivalent to knowing the actual conditions of the world under which the sentence would be (judged) true (Wittgenstein, 1922).", "If grammatical sentences are required for sentential inference, as per a truth-conditional approach (Montague, 1970), then permuted sentences should be meaningless.", "Put another way, the meanings of highly permuted sentences (if they exist) are not propositions, and thus those sentences don't have truth conditions.", "Only from the truth conditions of sentences can we tell if a sentence entails another.", "In short, the textual entailment task is technically undefined in our unnatural setting.",
"Since existing definitions don't immediately extend to UnNatural Language Inference (UNLI), we outline several hypothetical, systematic ways that a model might perform had it been sensitive to word order.", "We hypothesize two models that operate on the first principles of NLI, and one that doesn't.", "In the first case, Model A deems permuted sentences meaningless (devoid of truth values), as formal semantic theories of human language would predict.", "Thus, it assigns neutral to every permuted example.", "Next, Model B does not deem permuted sentences meaningless, and attempts to understand them.", "Humans find understanding permuted sentences difficult (see our human evaluations in §7).", "Model B could similarly struggle to decipher the meaning, and just sample labels equally for each example (i.e., assign equal probability mass to the outcome of each label).", "Finally, we hypothesize a non-systematic model, Model C, which attempts to treat permuted sentences as though they weren't permuted at all.", "This model could operate similarly to bag-of-words (BOW), and thus always assign the same label to the permuted examples as it would to the unpermuted examples.", "If the model failed to assign the original gold label to the original unpermuted example, it will also fail to assign the original gold label to its permutations; it will never get higher accuracy on permuted examples than on unpermuted ones.", "We find in our experiments that the state-of-the-art Transformer-based NLI models (as well as the pre-Transformer class of models) do not perform like any of the above hypothetical models.", "They perform closest to Model C, but are, in some cases, actually able to achieve higher accuracy on permuted examples.", "To better quantitatively describe this behaviour, we introduce our suite of Permutation Acceptance metrics that enable us to quantify how accepting models are of permuted sentences.", "Constructing the permuted dataset.", "For a given dataset $D$ having splits $D_{train}$ and $D_{test}$, we first train an NLI model $M$ on $D_{train}$ to achieve accuracy comparable to what was reported in the original papers.", "We then construct a randomized version of $D_{test}$, which we term $\hat{D}_{test}$, such that: for each example $(p_i, h_i, y_i) \in D_{test}$ (where $p_i$ and $h_i$ are the premise and hypothesis sentences of the example, respectively, and $y_i$ is the gold label), we use a permutation operator $F$ that returns a list $(\hat{P}_i, \hat{H}_i)$ of $q$ permuted sentences ($\hat{p}_i$ and $\hat{h}_i$), where $q$ is a hyperparameter.", "$F$ essentially permutes all positions of the words in a given sentence (i.e., either in premise or hypothesis) with the restriction that no word maintains its original position .", "In our initial setting, we do not explicitly control the placement of the words relative to their original neighbors, but we analyze clumping effects in §5.", "$\hat{D}_{test}$ now consists of $|D_{test}| \times q$ examples, with $q$ different permutations of hypothesis and premise for each original test example pair.", "If a sentence $S$ (e.g., $h_i$) contains $w$ words, then the total number of available permutations of $S$ is $(w-1)!$, thus making the output of $F$ one of $\binom{(w-1)!}{q}$ possible lists of permutations in this case.",
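A minimal sketch of the permutation operator F, assuming rejection sampling over random shuffles to enforce the restriction that no word keeps its original position; the function names are illustrative, not taken from the released code.

```python
import random

def permute_sentence(words, rng):
    """Sample a permutation in which no word keeps its original position
    (positions are compared by index, so duplicate words are handled too)."""
    idx = list(range(len(words)))
    while True:
        rng.shuffle(idx)
        if all(i != j for i, j in enumerate(idx)):  # derangement condition
            return [words[j] for j in idx]

def F(sentence, q, seed=0):
    """Permutation operator F: returns q distinct permutations of `sentence`.
    The paper drops sentences with |S| <= 5, so q distinct samples exist."""
    words = sentence.split()
    rng, seen, out = random.Random(seed), set(), []
    while len(out) < q:
        perm = tuple(permute_sentence(words, rng))
        if perm not in seen:
            seen.add(perm)
            out.append(" ".join(perm))
    return out

print(F("the quick brown fox jumps over the lazy dog", q=3))
```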
"For us, the space of possible outputs is larger, since we permute $p_i$ and $h_i$ separately (and ignore examples for which any $|S| \le 5$).", "Defining Permutation Acceptance.", "The choice of $q$ naturally allows us to take a statistical view of the predictability of a model on the permuted sentences.", "To that end, we define the following notational conventions.", "Let $A$ be the original accuracy of a given model $M$ on a dataset $D$, and $c$ be the number of examples in a dataset which are marked as correct according to the standard formulation of accuracy for the original dataset (i.e., they are assigned the ground-truth label).", "Typically, $A$ is given by $c / |D_{test}|$ or $c / |D_{dev}|$.", "To get an overall summary score, we let $\Omega_x$ be the percentage of examples $(p_i, h_i) \in D_{test}$ for which $\Pr_M(\hat{P}_i, \hat{H}_i)_{cor}$, the fraction of the $q$ permutations of an example to which $M$ assigns the gold label, exceeds a predetermined threshold $0 < x < 1$.", "[Figure 1: Graphical representation of the Permutation Acceptance class of metrics.]", "Concretely, a given example will count as correct according to $\Omega_x$ if more than $x$ percent of its permutations ($\hat{P}_i$ and $\hat{H}_i$) are assigned $y_i$ by the model $M$.", "Mathematically, $\Omega_x = \frac{1}{|D_{test}|} \sum_{(p_i, h_i) \in D_{test}} \mathbb{1}\left(\Pr_M(\hat{P}_i, \hat{H}_i)_{cor} > x\right)$ (2).", "There are two specific cases of $\Omega_x$ that we are most interested in.", "First, we define $\Omega_{max}$, or the Maximum Accuracy , where $x = 1/|D_{test}|$.", "In short, $\Omega_{max}$ gives the percentage of examples $(p_i, h_i) \in D_{test}$ for which there is at least one permutation $(\hat{p}_j, \hat{h}_j)$ to which model $M$ assigns the gold label $y_i$ (theoretically, $\Omega_{max} \to 1$ if the number of permutations $q$ is large).", "Second, we define $\Omega_{rand}$, or Random Baseline Accuracy , where $x = 1/m$, the chance probability (for balanced $m$-way classification, where $m = 3$ in NLI).", "This metric is less stringent than $\Omega_{max}$, as it counts an example if at least one third of its permutations are assigned the gold label (hence it provides a lower-bound relaxation).", "See Figure 1 for a graphical representation of $\Omega_x$.", "We also define $D_f$ to be the list of examples originally marked incorrect according to $A$ but now deemed correct according to $\Omega_{max}$.", "$D_c$ is the list of examples originally marked correct according to $A$.", "Thus, we should expect $|D_f| < |D_c|$ for models that have high accuracy.", "Additionally, we define $P_c$ and $P_f$ as the dataset-average percentage of permutations which predicted the gold label when the examples were originally correct ($D_c$) and when the examples were originally incorrect ($D_f$, hence flipped ) as per $A$, respectively.", "$P_f$ is defined analogously to $P_c$, replacing $D_c$ by $D_f$.", "Note that for a classic BOW model, $P_c = 100$ and $P_f = 0$, because it would rely on the words alone (not their order) to make its classification decision.", "Since permuting removes no words, BOW models should come to the same decisions for permuted examples as for the originals.",
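A small sketch of the Omega_x family, computed from cached per-permutation predictions. The threshold values in the usage lines follow the definitions above: a vanishingly small x recovers Omega_max's at-least-one-permutation reading (the paper uses x = 1/|D_test|), and x = 1/3 gives Omega_rand for 3-way NLI.

```python
def omega_x(perm_preds, gold_labels, x):
    """Permutation Acceptance Omega_x (Eq. 2): fraction of examples for which
    more than x of the q permutations receive the gold label.
    perm_preds[i] holds the model's predictions for example i's permutations."""
    hits = 0
    for preds, gold in zip(perm_preds, gold_labels):
        frac_correct = sum(p == gold for p in preds) / len(preds)  # Pr_M(P_i, H_i)_cor
        hits += frac_correct > x
    return hits / len(gold_labels)

# toy usage: q = 4 permutations per example, 3 examples, labels e/n/c
perm_preds = [["e", "n", "e", "e"], ["c", "c", "n", "e"], ["n", "n", "n", "n"]]
gold = ["e", "c", "c"]
print(omega_x(perm_preds, gold, x=1e-9))   # Omega_max-style: at least one correct permutation
print(omega_x(perm_preds, gold, x=1 / 3))  # Omega_rand: more than a third correct
```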
"We present results for two types of models:", "(a) Transformer-based models and", "(b) non-Transformer models.", "For (a) , we investigate state-of-the-art pre-trained models such as RoBERTa-Large (Liu et al., 2019), BART-Large (Lewis et al., 2020) and DistilBERT (Sanh et al., 2020).", "For (b) , we consider several recurrent and convolution-based neural networks, such as InferSent (Conneau et al., 2017), a Bidirectional LSTM (Collobert and Weston, 2008) and a ConvNet (Zhao et al., 2015).", "We train all models on MNLI, and evaluate on in-distribution (SNLI and MNLI) and out-of-distribution datasets (ANLI).", "We independently verify the results of (a) using both our fine-tuned models built with HuggingFace Transformers (Wolf et al., 2020) and pre-trained checkpoints from FairSeq (Ott et al., 2019) (using PyTorch Model Hub).", "For (b) , we use the InferSent codebase.", "We sample $q = 100$ permutations for each example in $D_{test}$, and use 100 seeds for each of those permutations to ensure full reproducibility.", "We drop examples from the test sets where we are unable to compute all unique randomizations; typically, these are examples with sentences of length less than 6 tokens.", "Models accept many permuted examples.", "We find $\Omega_{max}$ is very high for models trained and evaluated on MNLI (in-domain generalization), reaching 98.7% on the MNLI dev and test sets for RoBERTa, compared to an $A$ of 90.6% (Table 2).", "(Recall, human accuracy is approximately 92% on MNLI dev; Nangia and Bowman 2019.)", "This shows that there exists at least one permutation (usually many more) for almost all examples in $D_{test}$ such that model $M$ predicts the gold label.", "We also observe a high $\Omega_{rand}$ of 79.4%, showing that there are many examples for which the models outperform even a random baseline in accepting permuted sentences (see Appendix E for more values).", "Evaluating out-of-domain generalization with the ANLI dataset splits resulted in an $\Omega_{max}$ value that is notably higher than $A$ (89.7% $\Omega_{max}$ for RoBERTa compared to 45.6% $A$).", "As a consequence, we encounter many flips , i.e., examples where the model is unable to predict the gold label, but at least one permutation of that example is able to.", "However, recall that this analysis expects us to know the gold label upfront, so this test can be thought of as running a word-order probe test on the model until the model predicts the gold label (or gives up by exhausting our set of $q$ permutations).", "For out-of-domain generalization, $\Omega_{rand}$ decreases considerably (36.4% $\Omega_{rand}$ on A1), which means fewer permutations are accepted by the model.", "Next, recall that a classic bag-of-words model would have $P_c = 100$ and $P_f = 0$.", "No model performs strictly like a classic bag of words, although they do perform somewhat BOW-like ($P_c >> P_f$ for all test splits, Figure 5).", "We find this BOW-likeness to be higher for certain non-Transformer models (InferSent), as they exhibit higher $P_c$ (84.2% for InferSent compared to 70.7% for RoBERTa on MNLI).", "The high Permutation Acceptance we observe would be of less concern if the correct label prediction were just an outcome of chance, which could occur when the entropy of the log probabilities of the model output is high (suggesting uniform probabilities on the entailment, neutral and contradiction labels; recall Model B from §3).", "We first investigate the model probabilities for the Transformer-based models on the permutations that lead to the correct answer in Figure 2.", "We find overwhelming evidence that model confidences on in-distribution datasets (MNLI, SNLI) are highly skewed, resulting in low entropy, and the skew varies among different model types.", "BART proves to be the most skewed Transformer-based model.",
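The skewness/entropy comparison can be illustrated with a few lines of NumPy: a hypothetical Model B would produce near-uniform label distributions (entropy ln 3, about 1.1 nats), whereas the skewed outputs described above sit far below that. This is an illustrative computation, not the paper's analysis code.

```python
import numpy as np

def mean_prediction_entropy(label_probs):
    """Average Shannon entropy of the model's output distributions over
    permutations. label_probs: (num_permutations, 3) softmax probabilities over
    entailment/neutral/contradiction. Low entropy = skewed, confident output."""
    p = np.clip(np.asarray(label_probs), 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

uniform = np.full((100, 3), 1 / 3)               # a maximally uncertain model (Model B)
skewed = np.tile([0.9, 0.05, 0.05], (100, 1))    # an overconfident, skewed model
print(mean_prediction_entropy(uniform))  # ~1.099 (= ln 3)
print(mean_prediction_entropy(skewed))   # ~0.394
```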
"This skewness is not a property of model capacity: we observe that the DistilBERT log probabilities have skewness similar to the RoBERTa (large) model, while exhibiting lower $A$, $\Omega_{max}$, and $\Omega_{rand}$.", "For non-Transformers, whose accuracy $A$ is lower, the $\Omega_{max}$ achieved by these models is also predictably lower.", "We observe roughly the same relative performance in terms of $\Omega_{max}$ (Figure 5 and Appendix Table 2) and average entropy (Figure 2).", "However, when comparing the averaged entropy of the model predictions, it is clear that there is some benefit to being a worse model: non-Transformer models are not as overconfident on randomized sentences as Transformers are.", "The high confidence of Transformer models can be attributed to the overthinking phenomenon commonly observed in deep neural networks (Kaya et al., 2019) and BERT-based models (Zhou et al., 2020).", "Similar artifacts in Chinese NLU.", "We extended the experiments to the Original Chinese NLI dataset (Hu et al., 2020a, OCNLI), and reused the pre-trained RoBERTa-Large and InferSent (non-Transformer) models on OCNLI.", "Our findings are similar to the English results (Table 3), thereby suggesting that the phenomenon is not just an artifact of English text or tokenization.", "Other results.", "We investigated the effect of sentence length (which correlates with the number of possible permutations; Appendix A) and hypothesis-only randomization (models exhibit a similar phenomenon even when only the hypothesis is permuted; Appendix C).", "A natural question to ask following our findings: what is it about particular permutations that leads models to accept them?", "[Figure 3: BLEU-2 score versus acceptability of permuted sentences across all test datasets.]", "RoBERTa and BART performance is similar but differs considerably from the performance of non-Transformer-based models, such as InferSent and ConvNet.", "Since the permutation operation is drastic and only rarely preserves local word relations, we first investigate whether there exists a relationship between Permutation Acceptance scores and local word-order preservation.", "Concretely, we compare bi-gram word overlap (BLEU-2) with the percentage of permutations that are deemed correct (Figure 3).", "We observe that, due to our permutation process, the maximum BLEU-3 and BLEU-4 scores are negligibly low ($< 0.2$ BLEU-3 and $< 0.1$ BLEU-4), already calling into question the hypothesis that n-grams are the sole explanation for our finding.", "Because of this, we only compare BLEU-2 scores.", "Detailed experiments on specially constructed permutations that cover the entire range of BLEU-3 and BLEU-4 are provided in Appendix D.", "Although the probability of a permuted sentence being predicted correctly does appear to track the BLEU-2 score (Figure 3), the percentage of examples which were assigned the gold label by the Transformer-based models is still higher than we would expect from permutations with lower BLEU-2 (66% for the lowest BLEU-2 range of 0-0.15), suggesting that preserved relative word order alone cannot explain the high permutation acceptance rates.", "Thus, we find that local order preservation does correlate with Permutation Acceptance, but it doesn't fully explain the high Permutation Acceptance scores.",
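For reference, a BLEU-2 proxy for local word-order preservation can be computed as below, assuming NLTK's sentence_bleu with bigram-only weights and smoothing; the paper does not specify its BLEU implementation, so this is a plausible stand-in rather than the authors' exact setup.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu2(original, permuted):
    """BLEU-2 of a permuted sentence against the original, used here as a
    proxy for how much local (bigram) word order the permutation preserves."""
    smooth = SmoothingFunction().method1
    return sentence_bleu([original.split()], permuted.split(),
                         weights=(0.5, 0.5), smoothing_function=smooth)

orig = "the boats lie within feet of the fashionable bars and restaurants"
perm = "bars the of lie boats feet fashionable within restaurants and the"
print(round(bleu2(orig, perm), 3))  # low score: few bigrams survive the shuffle
```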
individual words or morphemes bear syntactic features telling us which other words they can combine with.", "For example, buy could be associated with (at least) two lexicalized syntactic structures, one containing two noun phrases (as in Kim bought cheese ), and another with three (as in Lee bought Logan cheese ).", "We speculate that our NLI models might accept permuted examples at high rates, because they are (perhaps noisily) reconstructing the original sentence from abstract, word-anchored information about common neighbors.", "To test this, we POS-tagged D train using 17 Universal Part-of-Speech tags (using spaCy, Honni-bal et al. 2020).", "For each w i S i , we compute the occurrence probability of POS tags on tokens in the neighborhood of w i .", "The neighborhood is specified by the radius r (a symmetrical window r tokens from w i S i to the left and right).", "We denote this sentence level probability of neighbor POS tags for a word w i as r { w i ,S i } R 17 (see an example in Figure 7 in the Appendix).", "Sentence-level word POS neighbor scores can be averaged across D train to get a type level score r { w i ,D train } R 17 , w i D train .", "Then, for a sentence S i D test , for each word w i S i , we compute a POS mini-tree overlap score : k { w i ,S i } = 1 k | argmax k r { w i ,D train } argmax k r { w i ,S i } | (4) Concretely, k { w i ,S i } computes the overlap of top-k POS tags in the neighborhood of a word w i in S with that of the train statistic.", "If a word has the same mini-tree in a given sentence as it has in the training set, then the overlap would be 1.", "For a given sentence S i , the aggregate k { S i } is defined by the average of the overlap scores of all its words: k { S i } = 1 | S i | (cid:80) w i S i k { w i ,S i } , and we call it a POS minitree signature .", "We can also compute the POS minitree signature of a permuted sentence S i to have k { S i } .", "If the permuted sentence POS signature comes close to that of the true sentence, then their ratio (i.e., k { S i } / k { S i } ) will be close to 1.", "Also, since POS signature is computed with respect to the train distribution, a ratio of > 1 indicates that the permuted sentence is closer to the overall train statistic than to the original unpermuted sentence in terms of POS signature.", "If high overlap with the training distribution correlates with percentage of permutations deemed correct, then our models treat words as if they project syntactic minitrees.", "We investigate the relationship with percentage of permuted sentences accepted with k { S i } / k { S i } in Figure 4.", "We observe that the POS Tag Minitree hypothesis holds for Transformer-based models, RoBERTa, BART and DistilBERT, where the percentage of accepted pairs increase as the sentences have higher overlap with the un-permuted sentence in terms of POS signature.", "For non-Transformer models such as InferSent, ConvNet, and BiLSTM models, the POS signature ratio to percentage of correct permutation remains the same or decreases, suggesting that the reasoning process employed by these models does not preserve local abstract syntax structure (i.e., POS neighbor relations).", "We expect humans to struggle with UNLI, given our intuitions and the sentence superiority findings (but see Mollica et al. 
"We expect humans to struggle with UNLI, given our intuitions and the sentence superiority findings (but see Mollica et al., 2020).", "To test this, we presented two experts in NLI (one a linguist) with permuted sentence pairs to label.", "Concretely, we drew an equal number of examples from the MNLI Matched dev set (100 examples where RoBERTa predicts the gold label, D_c, and 100 examples where it fails to do so, D_f), and then permuted these examples using F.", "The experts were given no additional information (recall that it is common knowledge that NLI is a roughly balanced 3-way classification task).", "Unbeknownst to the experts, all permuted sentences in the sample were actually accepted by the RoBERTa (large) model (trained on the MNLI dataset).", "We observe that the experts performed much worse than RoBERTa (Table 4), although their accuracy was a bit higher than random.", "Footnote 4: Concurrent work by Gupta et al. (2021) found that untrained crowdworkers accept NLI examples that have been subjected to different kinds of perturbations at roughly most-frequent-class levels, i.e., only 35% of the time.", "We also find that for both experts, accuracy on permutations from D_c was higher than on D_f, which verifies findings that showed high word overlap can give hints about the ground truth label (Dasgupta et al., 2018; Poliak et al., 2018; Gururangan et al., 2018; Naik et al., 2019).", "We propose an initial attempt to mitigate the effect of correct prediction on permuted examples.", "As we observe in Section 5, model entropy on permuted examples is significantly lower than expected.", "Neural networks tend to output higher confidence than random even for unknown inputs (Gandhi and Lake, 2020), which might be an underlying cause of the high Permutation Acceptance.", "An ideal model would be ambivalent about randomized ungrammatical sentences.", "Thus, we train NLI models baking in the principle of mutual exclusivity (Gandhi and Lake, 2020) by maximizing model entropy.", "Concretely, we fine-tune RoBERTa on MNLI while maximizing the entropy (H) on a subset of n randomized examples (p̂_i, ĥ_i) for each example (p, h) in MNLI.", "We modify the loss function as follows: L = argmin_θ Σ_{((p,h),y)} −y log p(y | (p, h); θ) − Σ_{i=1}^{n} H(y | (p̂_i, ĥ_i); θ) (5).", "Using this maximum entropy method (n = 1), we find that the model improves considerably with respect to its robustness to randomized sentences, all while taking no hit to accuracy (Table 5).", "We observe that no model reaches an Ω_max score close to 0, suggesting further room to explore other methods for decreasing models' Permutation Acceptance.", "Similar approaches have also proven useful for other tasks (Gupta et al., 2021).",
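A minimal PyTorch sketch of the maximum-entropy objective in Equation 5; the classifier interface (token ids to logits), the absence of a weighting coefficient, and a single permuted example per instance (n = 1) are assumptions consistent with the text.

```python
# Sketch of the maximum-entropy objective (Eq. 5), assuming a standard
# 3-way NLI classifier mapping token ids -> logits, and n = 1 permuted
# example per training instance, as in the paper's reported setting.
import torch
import torch.nn.functional as F

def max_entropy_loss(model, batch, permuted_batch):
    # Standard cross-entropy on the original (premise, hypothesis) pairs.
    logits = model(batch["input_ids"], attention_mask=batch["attention_mask"])
    ce = F.cross_entropy(logits, batch["labels"])

    # Entropy of predictions on permuted inputs; subtracting it from the
    # loss *maximizes* entropy, pushing the model toward ambivalence on
    # randomized, ungrammatical sentences.
    p_logits = model(permuted_batch["input_ids"],
                     attention_mask=permuted_batch["attention_mask"])
    log_probs = F.log_softmax(p_logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()

    return ce - entropy
```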
"We show that state-of-the-art models do not rely on sentence structure the way we think they should: NLI models (Transformer-based models, RNNs, and ConvNets) are largely insensitive to permutations of word order that corrupt the original syntax.", "We also show that reordering words can cause models to flip classification labels.", "We do find that models seem to have learned some syntactic information, as evidenced by a correlation between preservation of abstract POS neighborhood information and the rate of acceptance by models, but these results do not discount the high rates of Permutation Acceptance, and require further verification.", "Coupled with the finding that humans cannot perform UNLI at all well, the high rate of permutation acceptance that we observe leads us to conclude that current models do not yet know syntax in the fully systematic and humanlike way we would like them to.", "A few years ago, Manning (2015) encouraged NLP to consider the details of human language, how it is learned, processed, and how it changes, rather than just chasing state-of-the-art numbers on a benchmark task.", "We expand upon this view, and suggest one particular future direction: we should train models not only to do well on clean test data, but also not to overgeneralize to corrupted input.", "Thanks to Omar Agha, Dzmitry Bahdanau, Sam Bowman, Hagen Blix, Ryan Cotterell, Emily Dinan, Michal Drozdal, Charlie Lovering, Nikita Nangia, Alicia Parrish, Grusha Prasad, Roy Schwartz, Shagun Sodhani, Anna Szabolsci, Alex Warstadt, Jackie Chi-kit Cheung, Timothy O'Donnell and members of the McGill MCQLL lab for many invaluable comments and feedback on early drafts." ]
[ "abstain", "objective", "objective", "objective", "result", "abstain", "result", "other", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "result", "result", "result", "objective", "objective", "method", "result", "result", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "objective", "abstain", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "result", "result", "result", "result", "abstain", "method", "other" ]
[ "Lifelong learning (LL) aims to train a neural network on a stream of tasks while retaining knowledge from previous tasks.", "However, many prior attempts in NLP still suffer from the catastrophic forgetting issue, where the model completely forgets what it just learned in the previous tasks.", "In this paper, we introduce Rational LAMOL, a novel end-to-end LL framework for language models.", "In order to alleviate catastrophic forgetting, Rational LAMOL enhances LAMOL, a recent LL model, by applying critical freezing guided by human rationales.", "When the human rationales are not available, we propose exploiting unsupervised generated rationales as substitutions.", "In the experiment, we tested Rational LAMOL on permutations of three datasets from the ERASER benchmark.", "The results show that our proposed framework outperformed vanilla LAMOL on most permutations.", "Furthermore, unsupervised rationale generation was able to consistently improve the overall LL performance from the baseline without relying on human-annotated rationales.", "We made our code publicly available at https://github.", "com/kanwatchara-k/r_lamol .", "The grounds of lifelong learning (LL) stem from the ability of humans to continually acquire, consolidate, and transfer knowledge and skills throughout their lifespan.", "This ability is also important for real-world natural language processing (NLP) applications, where autonomous agents are required to interact with users from various domains through continuous streams of information and language Equal contributions Corresponding author semantic drifts occur over time.", "The existing dominant paradigm for machine learning, however, is isolated learning (Chen and Liu, 2016).", "While isolated learning has shown some successes in a variety of domains, their applicability remains limited to the assumption that all samples are available during the learning phase.", "When a stream of tasks are trained sequentially, machine learning and neural network models face catastrophic forgetting or interference (McCloskey and Cohen, 1989).", "This occurs due to the non-stationary data distribution that biases the model.", "We focus on lifelong language learning (LLL), which is lifelong learning on a stream of NLP tasks.", "To the best of our knowledge, the grounds of LLL are left largely underexplored.", "LAMOL is an LLL general framework that has garnered recent interest due to its simplicity (Sun et al., 2020).", "In particular, LAMOL transforms all NLP tasks into the question answering (QA) format according to McCann et al. 
"However, there is still a gap between the performance of LAMOL and the result of multi-task learning, which is generally considered the upper bound of LLL performance.", "This indicates that pseudo-sample generation alone may not be sufficient to prevent catastrophic forgetting.", "In this paper, we improve existing LLL strategies by proposing Rational LAMOL, a rationale-based lifelong learning framework which equips the original LAMOL with critical freezing (Nguyen et al., 2020) to further prevent catastrophic forgetting.", "In particular, we devise an algorithm to identify critical components in transformer-based language models using rationales, and the selected components are frozen to maintain learned knowledge while the model is trained on a new task.", "The contributions of our paper are listed below: We demonstrate the importance of freezing plastic components (i.e., components that are most susceptible to change) in transformer-based models to strengthen memories of previously learned tasks in the LLL setting.", "We propose a critical component identification algorithm which analyzes the transformer-based LLL model with rationales so as to find the most plastic component to freeze.", "This step is so-called critical freezing, first devised in computer vision (Nguyen et al., 2020), which we adapt to NLP.", "We propose that rationales generated in an unsupervised manner by InvRat (Chang et al., 2020) can be effectively used as substitutions for human rationales, allowing our framework to be applied to generic NLP datasets.", "We evaluated Rational LAMOL on six task-order permutations of three datasets from the ERASER benchmark (DeYoung et al., 2020).", "The results show that our proposed framework outperformed the original LAMOL on five out of the six permutations, achieving average improvements of 1.83% with a lower standard deviation of 4.57%.", "Moreover, using unsupervised rationale generation instead of human rationales also yielded competitive performance, achieving average improvements of 2.67% over the original LAMOL.", "In this section, we briefly introduce the concepts of lifelong learning, catastrophic forgetting, and component freezing, which are relevant to the core idea of Rational LAMOL.", "We also briefly summarize prominent research related to rationales.", "While people typically fine-tune a pre-trained model to perform a single task, lifelong learning (LL) is a setting in which a learner performs sequential learning of infinitely incoming tasks T = {T_1, T_2, ..., T_i, ...}, where T_i is the i-th task to learn at a particular point in time.", "The objective of the LL learner is ideally both to optimize the performance on the new task and to maintain optimal performance on previous tasks T_t for t = 0, 1, ..., i.", "Moreover, the ability to transfer knowledge across different tasks is also desired.", "However, naively training on a sequence of tasks without accounting for the difference in data distributions would result in an abrupt decrease in old tasks' performance.", "This phenomenon is known as Catastrophic Forgetting (McCloskey and Cohen, 1989).", "There are multiple existing works that aim to mitigate catastrophic forgetting in LL.", "They can be categorized into three major approaches.", "First, regularization methods use a regularization term to constrain changes when updating weights in a new task (Kirkpatrick et al., 2017; Aljundi et al., 2017; Lee et al., 2017).",
"Second, data-based methods disallow significant deviation of weights from previous tasks by keeping a small subset of data from the previous tasks or generating pseudo-data to refresh the learned knowledge (Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019; de Masson d'Autume et al., 2019; Li and Hoiem, 2018).", "Third, architecture-based methods dynamically transform the neural network architecture in order to accommodate new knowledge (Rusu et al., 2016; Chen et al., 2016).", "Lifelong Language Learning, or LLL, is a scenario where a model sequentially learns from a stream of NLP tasks in an LL manner.", "To the best of our knowledge, LLL has rarely been studied, and previous works usually target a single type of NLP task (Chen et al., 2015; Liu et al., 2019; de Masson d'Autume et al., 2019).", "To go beyond this limitation, Sun et al. (2020) proposed LAMOL, a learning framework that utilizes a language model to simultaneously predict outputs and learn to generate pseudo-training examples, which are exploited to alleviate catastrophic forgetting.", "Hence, LAMOL, as well as our Rational LAMOL, naturally falls into the data-based LL approach, since data from previous tasks, albeit generated, is utilized to constrain the model.", "Component Freezing While component freezing is also a common practice in the fine-tuning process, there it is done to prevent loss of general knowledge in the lower layers of the model (Raganato and Tiedemann, 2018).", "By contrast, many architecture-based LL methods, for example Rusu et al. (2016), utilize component freezing to prevent changes to knowledge learned from previous tasks and enlarge the model to accommodate new tasks, thereby making the model immune to forgetting.", "Our Rational LAMOL also uses component freezing, but unlike architecture-based methods, only a small part of the model is frozen and its size is constant throughout the learning process.", "Rationales Rationales are reasons for labels or predictions.", "In NLP, they are usually the parts of the input text which support or contribute to the class labels.", "Rationales can be either annotated by humans or generated by machine learning models.", "Human rationales have been used to enhance machine learning in multiple studies.", "For instance, Rajani et al. (2019) used rationales to guide a neural network toward better reasoning.", "Bao et al. (2018) utilized rationales as auxiliary information to train a neural network model, reducing the training examples required to achieve good results.", "Recently, DeYoung et al. (2020) introduced the ERASER benchmark consisting of multiple datasets, all of which are annotated with human rationales.", "This facilitates the advancement of research on interpretable NLP.", "In our experiments, we used human rationales from ERASER in the critical component identification step to find the most plastic component to be frozen.", "Meanwhile, some researchers attempt to design architectures to predict rationales from labelled data.", "Existing rationalization techniques commonly use the maximum mutual information (MMI) criterion to select rationales, which is prone to choosing spurious correlations between input features and outputs as rationales (Lei et al., 2016; Yu et al., 2019).", "To fix this issue, Invariant Rationalization (InvRat) (Chang et al., 2020) follows the invariant risk minimization (IRM) paradigm, as introduced by Arjovsky et al. (2019).",
"It utilizes an environment variable to isolate and select the causal features that faithfully explain the output.", "In order to allow Rational LAMOL to be applied to any NLP dataset, we choose to leverage InvRat to automatically produce rationales, due to its superior performance and straightforward application, removing the need for human rationales.", "We introduce Rational LAMOL and its detailed implementation in this section.", "As Rational LAMOL is based on LAMOL (Sun et al., 2020), we briefly explain LAMOL in Section 3.1.", "Then we introduce the core lifelong learning framework of Rational LAMOL in Section 3.2.", "This is followed by two proposed enhancements, critical component identification and unsupervised rationale generation, detailed in Sections 3.3 and 3.4, respectively.", "Language Modeling for Lifelong Language Learning (LAMOL) (Sun et al., 2020) utilizes a single language model (LM) as a multipurpose model.", "Framing all tasks as question answering (QA), the LM serves as a generic, task-agnostic model.", "In addition, LAMOL trains the LM as a generative model which produces samples upon receiving a special generation token.", "Using a single model both for providing answers and for generating pseudo-samples, LAMOL truly exhibits LM and QA duality.", "The benefit that comes with the generative part of the model tackles the long-standing LL issue of catastrophic forgetting.", "While other methods make use of extra memory or model capacity to preserve a subset of real samples (Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019) or to accommodate a separate generator (Shin et al., 2017; Kemker and Kanan, 2017), LAMOL transfers all these responsibilities into a single model.", "By modeling the input, it learns to select potentially prominent features that benefit learning.", "This allows the model to replay meaningful pseudo-samples from previous tasks while forcing the model to memorize knowledge acquired from previous tasks tied to the generation token.", "In this paper, we propose exploiting rationales with LAMOL to further improve LLL performance, as discussed next.", "Rational LAMOL, illustrated in Figure 1 (right), is a learning framework revolving around the original methodologies of LAMOL.",
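To make the LM/QA duality concrete, the following is a minimal sketch of how one example might be serialized for LAMOL's two objectives; the special tokens ([GEN], [SEP], [ANS], [EOS]) are hypothetical placeholders for illustration, not LAMOL's actual vocabulary.

```python
# Illustrative serialization of one example for LAMOL-style training.
# Token names are hypothetical placeholders, not LAMOL's exact tokens.

def qa_example(context: str, question: str, answer: str) -> str:
    """QA objective: the LM learns to continue the prompt with the answer."""
    return f"{context} [SEP] {question} [ANS] {answer} [EOS]"

def lm_example(context: str, question: str, answer: str) -> str:
    """LM objective: conditioned on [GEN], the LM learns to reproduce the
    whole example, so it can later sample pseudo-data for old tasks."""
    return f"[GEN] {context} [SEP] {question} [ANS] {answer} [EOS]"

sample = ("the movie was a delight from start to finish",
          "is this review positive or negative?", "positive")
print(qa_example(*sample))
print(lm_example(*sample))
```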
"We consider an LL setting where T = {T_1, T_2, ..., T_i, ...} is a stream of learning tasks and T_i is the i-th task to train on at a particular point in time.", "Let M_i denote the model M after being trained on task T_i, where M_0 is the initialized pre-trained model.", "Using these notations and starting from M_0, Rational LAMOL works iteratively in four steps as follows.", "First, given a model M_i, it trains M_i on task T_{i+1} using LAMOL's training procedure to obtain M_{i+1}.", "Second, for i > 0, it applies critical component identification, which is described in Section 3.3, on M_i and M_{i+1} with the rationales of task T_i to dissect the most plastic layers or blocks.", "Third, we take a step back to work from M_i and apply critical freezing, i.e., freezing the most plastic components, to obtain M_i^CF.", "Figure 1: Left: The overview of LAMOL.", "Lastly, we train M_i^CF on task T_{i+1} again to get a new model M_{i+1} that retains the most plastic memories.", "Note that despite the unique nature of LAMOL, Rational LAMOL does not limit its usage to a single model architecture.", "It has potential applications to general attention-based models suffering from catastrophic forgetting through domain shifts across tasks.", "We propose the Critical Component Identification (CCI) algorithm, which points out the most plastic block of our transformer-based LL model before moving on to a new task completely.", "(This shares the same spirit as Nguyen et al. (2020), who propose Auto DeepVis to find the most plastic blocks of CNN models for image classification.)", "The chosen block is the one that forgets what it has learned from the recent task the most when being introduced to a new task, so we freeze this block to prevent catastrophic forgetting in Rational LAMOL.", "As shown in Algorithm 1, for each validation sample x ∈ X of task T_i, CCI compares the attention maps AT produced by the model M_i (i.e., the old model M_O in Algorithm 1) and M_{i+1} (i.e., the new model M_N in Algorithm 1) to find the most plastic block b* with respect to this sample.", "Then it returns the block F, which is the mode of all b*, i.e., the block voted for by most of the samples in X.", "Note that most of the variable names are kept similar to Nguyen et al. (2020) for ease of reference, and some sections are refactored for readability.",
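A minimal sketch of the outer structure of Algorithm 1 as just described: most-plastic blocks are collected per validation sample and the final block F is the mode of these votes. Here, per_block_ious is an assumed helper implementing the IoU-based per-block scoring detailed in the following paragraphs.

```python
# Sketch of the CCI voting loop (Algorithm 1's outer structure).
# per_block_ious(x, old_model, new_model) is assumed to return, for each
# transformer block j, the max IoU between the new model's attention and
# the old model's representative map: the per-sample plasticity signal.
from collections import Counter

def critical_component(val_samples, old_model, new_model, per_block_ious):
    votes = Counter()
    for x in val_samples:
        ious = per_block_ious(x, old_model, new_model)  # one score per block
        b_star = min(range(len(ious)), key=ious.__getitem__)  # most plastic
        votes[b_star] += 1
    return votes.most_common(1)[0][0]  # mode over samples = block F to freeze
```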
"In particular, to find b* for the sample x, we iterate over all blocks j = 1, ..., K and perform two steps.", "First, we find the representative map of block j in M_O with respect to the ground truth GT (i.e., RM_{M_O,GT}(j)) by selecting the attention map of the attention head a* and the token s* in x from block j that is most similar to the human rationale for the sample x (i.e., the ground truth GT in Algorithm 1).", "Although interpretable NLP stands to be a nascent subfield for exploration (DeYoung et al., 2020), elementary visualization of attention is possible in Transformers (Vig, 2019; Hoover et al., 2020).", "These self-attention mechanisms associate distant positions of a single sequence, and many appear to exhibit behavior related to the sentences' syntactic and semantic structure (Vaswani et al., 2017).", "We hypothesize that the semantic nature of the self-attention mechanisms would opt for tokens most related to positive evidence vital for predictions, analogous to the rationale snippets that support outputs.", "Algorithm 1 (Critical Component Identification). Input: validation set X, ground truth rationale GT, old model M_O, new model M_N, number of blocks K. Output: critical block F. For each validation sample x ∈ X: AT_O, AT_N ← [M_O(x), M_N(x)]; IoUs ← []; for j = 1, ..., K: RM_{M_O,GT} ← AT_{j,a*,s*} with highest IoU_{M_O,GT}; RM_{M_N,M_O} ← AT_{j,a*,s*} with highest IoU_{M_N,M_O}; APPEND(IoUs, max(IoU_{M_N,M_O})); end for; b* ← argmin_j IoUs; APPEND(B, b*); end for; F = MODE(B); return F.", "To compute the similarity between attention maps and human rationales, we use Intersection over Union (IoU).", "Formally, the following equations explain this step: RM_{M,GT}(j) = AT_{j,a*,s*} (1), where (a*, s*) = argmax_{a ∈ A, s ∈ S} IoU_{M,GT}(j, a, s) (2), and IoU_{M,GT}(j, a, s) = |P_γ(AT_{j,a,s}) ∩ GT| / |P_γ(AT_{j,a,s}) ∪ GT| (3).", "A is the set of all attention heads in the block, and S is the set of all tokens in x.", "IoU_{M,GT}(j, a, s) reflects the similarity between the ground truth and the attention map of block j, head a, and token s in x.", "Since the ground truth contains binary labels indicating whether a token is part of the rationale or not, we need to convert the attention map AT_{j,a,s} into binary labels using P_γ, a simple binary thresholding function which returns 1 for values greater than the γ-th percentile over the entire sequence (and 0 otherwise).", "This is required as IoU works by comparing two binary masks.", "Figure 2 visualizes how to compute the IoU score by drilling down into each component of the model.", "After we obtain RM_{M_O,GT}(j) for block j, the second step finds the representative map of block j in M_N with respect to M_O (i.e., RM_{M_N,M_O}(j)).", "This can be done by replacing M and GT in Equations 1-3 with M_N and M_O, respectively, and replacing GT on the right side of Equation 3 with P_γ(RM_{M_O,GT}(j)).", "After that, we collect the maximum IoU_{M_N,M_O} of block j, which represents the amount of knowledge of task T_i held in the model after we introduce task T_{i+1}.", "Therefore, the most plastic block b* for this sample x is the block with the lowest maximum IoU_{M_N,M_O}.",
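A minimal sketch of the binary thresholding P_γ and the IoU of Equation 3, assuming the attention map and the rationale are 1-D arrays over the same token positions.

```python
# Sketch of P_gamma thresholding and the IoU of Eq. 3. Assumes the
# attention map and the rationale are 1-D arrays over the same tokens.
import numpy as np

def p_gamma(attention: np.ndarray, gamma: float = 80.0) -> np.ndarray:
    """Binarize: 1 where the score exceeds the gamma-th percentile."""
    threshold = np.percentile(attention, gamma)
    return (attention > threshold).astype(int)

def iou(attention: np.ndarray, rationale: np.ndarray, gamma: float = 80.0) -> float:
    """Intersection over Union between the binarized attention map and a
    binary rationale mask (Eq. 3)."""
    mask = p_gamma(attention, gamma)
    inter = np.logical_and(mask, rationale).sum()
    union = np.logical_or(mask, rationale).sum()
    return inter / union if union else 0.0

attn = np.array([0.02, 0.40, 0.05, 0.30, 0.03, 0.20])
gt = np.array([0, 1, 0, 1, 0, 0])  # human rationale mask over 6 tokens
print(iou(attn, gt))  # gamma=80 keeps roughly the top 20% of tokens
```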
"Actually, transformer blocks are not the finest granularity at which we could freeze.", "Since each block contains several attention heads, it is possible to freeze individual attention heads.", "Hence, we propose another algorithm, applied to heads.", "This is similar to Algorithm 1, but instead of searching for the blocks with the lowest maximum IoU, the algorithm searches using both the attention blocks and attention heads together as keys.", "Although the definition of IoU stays the same, the definition of the representative map moves to a higher granularity.", "Formally, for a block index j and attention head a, RM_{M,GT} is computed as: RM_{M,GT}(j, a) = AT_{j,a,s*} (4), where s* = argmax_{s ∈ S} IoU_{M,GT}(j, a, s) (5), and we can freeze the top n heads that receive the most votes from the samples in the validation set X.", "As described in Section 3.2, our framework requires rationales as input.", "However, most existing NLP datasets are not annotated with rationales.", "To overcome this limitation, we leverage a recent unsupervised rationale generation framework, InvRat (Chang et al., 2020), to generate rationales as substitutions.", "Originally, InvRat was designed for single-input tasks such as sentiment analysis.", "However, since some of the datasets we experimented with are text-pair classification, we append the query (or question) to the end of each sample to accommodate these tasks.", "To evaluate our proposed framework, we conducted an experiment on three English text classification datasets, curated and made publicly available by ERASER (DeYoung et al., 2020).", "All three datasets, listed below, are provided with rationales marked by humans.", "Table 1 contains a summary of the datasets, dataset sizes, and metrics.", "BoolQ (Clark et al., 2019): a dataset comprising selected passages from Wikipedia and naturally occurring yes/no questions to be answered by the model.", "Movie Reviews (Zaidan and Eisner, 2008): a dataset composed of movie reviews.", "It contains positive and negative sentiment labels to be predicted by the model.", "SciFact (Wadden et al., 2020): a dataset containing expert-written scientific claims coupled with evidence-containing abstracts.", "Given a claim, the model has to identify whether the abstract supports or refutes the claim.", "We ran our proposed framework on all six permutations of task order, three times with different random seeds.", "The average results are then reported in Section 5.", "We followed the best LAMOL configuration from Sun et al. (2020).",
"All parameters were kept at their default values.", "For all methods, we use the small GPT-2 model (Radford et al., 2019) as the language model.", "Each task was trained for five epochs.", "We applied greedy decoding during inference.", "Due to the fine-tuning instability of neural networks, in each task order we used the same first-task model M_1 for all methods in each run, for fair comparison.", "Critical freezing was applied to the model at two different levels of granularity: block level and head level.", "The validation set of each task was used as input to Algorithm 1.", "For block-level granularity, we chose to freeze the most frequent block returned by the algorithm, while for head-level granularity, the 12 heads returned by the algorithm were kept frozen during training.", "We used γ = 80, i.e., selecting the top 20th percentile of attention scores to compare with ground truth rationales.", "As the ERASER benchmark has an average ratio of rationale tokens to document tokens of around 9.4%, we allowed rationale selection to be two times the average ratio (i.e., 20%).", "For InvRat, we opted for 300-dimensional GloVe embeddings (Pennington et al., 2014).", "The generator and predictor modules of InvRat were based on 1-layer bidirectional gated recurrent units (Chung et al., 2014) with 256 hidden units, as in Chang et al. (2020).", "The maximum model input was set to 1,024 tokens.", "All hyperparameters for each task were tuned on the validation set.", "This section reports the performance of Rational LAMOL and compares it with LAMOL as the baseline, as well as with multitask learning, which is considered the upper bound of LL.", "We also analyze the effect of each component in the proposed framework.", "In order to validate whether component freezing truly helps reduce catastrophic forgetting, we performed partial brute-force block-level freezing on each task permutation to approximately determine the upper bound of our Rational LAMOL_block.", "Due to limited computing resources, we compromised by searching over all even-numbered block indices, and choosing the model with the maximum average score on the first two tasks to continue the brute force on the latter two tasks.", "Since brute force was performed on a per-task basis, our search space would be 6+6, the first six being the six blocks on the first two tasks, and the latter six being the six blocks on the last two tasks.", "Do note that a true brute force would be 12 × 12.", "Although it is possible that our partial brute force is sub-optimal, we find it a good compromise given limited computing resources.", "The results are presented in Table 2.",
"Brute force was able to outperform vanilla LAMOL by a substantial margin of 3.68%, only 1.36% from the multitask upper bound.", "This suggests that component freezing is able to further nullify the effect of catastrophic forgetting in LAMOL.", "It also achieved a standard deviation of only 2.3%, compared with LAMOL's 5.28%.", "This suggests that freezing the right component helps with task-order resilience.", "A sample of accuracy graphs (as the learning progressed) of the compared methods, for the BoolQ → SciFact → Movies (BSM) task order, is shown in Figure 4, from top to bottom respectively.", "As the first task, BoolQ was not really affected by SciFact, but encountered a heavy drop during the third task, Movies.", "In the baseline, BoolQ dropped from 61% to a mere 6%, only rebounding to 26% at the end.", "However, after freezing the most plastic block identified by partial brute forcing, BoolQ dropped from 62% to 15%, rebounding up to 47%.", "Comparatively, in the second task, SciFact encountered a smaller drop during the third task, from 63% to 55%, and then rebounded back to 65%.", "Figure 4: Learning curves of task order BSM, comparing LAMOL, Partial Brute Force, R-LAMOL block, R-LAMOL head, and their generated-rationale variants R-LAMOL block (g) and R-LAMOL head (g).", "As the last task, Movies was not affected by catastrophic forgetting.", "Accuracy graphs for all permutations of tasks are available in the Appendix, from which we make several observations concerning the effect of task orders on the overall performance: There is evidence that Movies accelerates the forgetting of the first task due to the abrupt change in data distribution.", "However, performance on the Movies task itself is barely affected by the task order.", "We attribute this to the low difficulty of the task.", "There is usually no interference between the tasks BoolQ and SciFact when these tasks are trained adjacently, since they are similar.", "It is unrealistic to perform brute force in every single setting.", "So, it is crucial that our algorithm uses a reasonable amount of time while still maintaining improvements over the baseline.", "The CCI algorithm requires each task except task 1 to be repeated twice.", "This doubles the time needed to train a single task.", "Combined with the time required for CCI, Rational LAMOL required approximately 2.4 times more time than vanilla LAMOL to completely train a model, as shown in Figure 3.",
"On the other hand, our algorithm used only approximately half of the time it took to train in the partial brute-force fashion.", "Currently, CCI only measures plasticity between two models (M_i and M_{i+1}).", "Single-model analysis for layer plasticity evaluation is left for future work.", "From Table 2, Rational LAMOL_block outperformed LAMOL by 1.83% average accuracy (0.97% average Macro-F1) over all permutations while having a smaller standard deviation, indicating that it is also more robust to task orders.", "Rational LAMOL_head was able to match or outperform LAMOL in five out of six task orders, but the significant decrease in the SBM order lowered the average to a 0.43% gain (and a slight decrease in Macro-F1) over the baseline.", "Upon further inspection, we found that the pseudo-samples of SciFact exhibited high variance in quality during pseudo-data replay.", "In addition to generation token mismatch, i.e., a situation where a pseudo-sample has an answer token from a wrong task, the low volume of SciFact training data affected the quality of the pseudo-samples generated.", "So, this accelerated catastrophic forgetting rather than alleviating it.", "Without the SBM drop, Rational LAMOL_head performed comparably to or slightly better than the block level.", "Performing a one-tailed paired t-test on all data points across the 3 random seeds, we observed that block-level freezing wins against the original LAMOL with statistical significance (p-values of 0.023 and 0.042 for block-level and generated block-level, respectively).", "With the SBM result neglected as an outlier, both block-level and head-level significantly improved the results compared with the original LAMOL (p-values of 0.015, 0.014, 0.010, and 0.049 for block-level, generated block-level, head-level, and generated head-level, respectively).", "However, there is no conclusive evidence of which method (head-level or block-level freezing) is significantly better (p-value of 0.133).", "Even though our Rational LAMOL outperformed the baseline, there was still a gap from the brute-force upper bound.", "This could be due to many incompatibilities between human rationales and machine attention scores, as mentioned in Bao et al. (2018), which made our algorithm choose sub-optimal layers/heads.", "Due to the difference in focus between humans and machines, it is conceivable that the rationales generated by InvRat would be mostly misaligned with human rationales.", "This is shown in Table 3, where the F1 scores of InvRat are quite low when compared with human rationales.", "Figure 5 shows an example of generated rationales output by InvRat compared with human rationales.", "Despite that, Generated Rational LAMOL_block outperformed both Rational LAMOL and the LAMOL baseline by 0.84% accuracy (0.31% Macro-F1) and 2.67% accuracy (1.27% Macro-F1), respectively, further reducing the gap to Brute Force, the approximate upper bound of the proposed CCI.", "This suggests that rationales chosen by InvRat, regardless of how nonsensical they appear, still carry information that eliminates the need for human rationales.", "The results are consistent with Bao et al. (2018), who showed that significant gains are achieved when using machine attention scores as an additional supervision signal instead of using human rationales.",
"Last but not least, Figure 3 shows that the process of generating rationales using InvRat, including training and inference, contributed only marginally, about 15 minutes, to the total time used in the training process.", "To effectively retain learned knowledge in LL for NLP tasks, we proposed Rational LAMOL, a learning framework that uses rationales to identify and freeze the most critical components of the model while it is being trained on a new task.", "We showed that Rational LAMOL is able to outperform LAMOL by a significant margin.", "Furthermore, our framework can be applied to any NLP dataset by leveraging unsupervised rationale generation, eliminating the need for human rationales while maintaining comparable improvements.", "Overall, Rational LAMOL bridges the gap between LL in NLP and model understanding through rationales, exhibiting potential for true lifelong language learning as well as limiting catastrophic forgetting." ]
[ "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain" ]
[ "We present BERTGEN , a novel generative, decoder-only model which extends BERT by fusing multimodal and multilingual pre-trained models VL-BERT and M-BERT , respectively.", "BERTGEN is auto-regressively trained for language generation tasks, namely image captioning, machine translation and multimodal machine translation, under a multitask setting.", "With a comprehensive set of evaluations, we show that BERTGEN outperforms many strong baselines across the tasks explored.", "We also show BERTGEN 's ability for zero-shot language generation, where it exhibits competitive performance to supervised counterparts.", "Finally, we conduct ablation studies which demonstrate that BERTGEN substantially benefits from multi-tasking and effectively transfers relevant inductive biases from the pre-trained models.", "Recent work in unsupervised and self-supervised pre-training has revolutionised the field of natural language understanding (NLU), resulting in high performance ceilings across multiple tasks (Devlin et al., 2019; Yang et al., 2019; Dong et al., 2019).", "The recent success of language model pre-training with masked language modelling (MLM) such as BERT (Devlin et al., 2019) further paved the way for more complex approaches that combine language pre-training with images (Tan and Bansal, 2019; Su et al., 2020; Lu et al., 2020), video (Sun et al., 2019), and speech (Chuang et al., 2020).", "Most of these approaches follow a task-specific fine-tuning step after the model is pre-trained.", "However, there has been little work on exploiting pre-trained MLMs for natural language generation (NLG) tasks.", "Previous work argues that the MLM objective is ill-suited for generation tasks such as machine translation (Yang et al., 2019; Rothe et al., 2020).", "Recent work in this direction has predominantly investigated the use of pre-trained models to either initialise Transformer-based encoder-decoder models (Imamura and Sumita, 2019; Clinchant et al., 2019; Yang et al., 2020; Rothe et al., 2020) or to distill knowledge for sequence generation tasks (Chen et al., 2020).", "In this work, we present BERTGEN , which extends BERT in a generative setting ( 2.1).", "This results in a single generator without a separation between the encoder and the decoder capable of consuming multiple input modalities and generating in multiple languages.", "The latter features are achieved by transferring knowledge from state-of-the-art pre-trained models, namely VL-BERT (Su et al., 2020) and multilingual BERT (M-BERT ) (Devlin et al., 2019).", "We train BERTGEN on various tasks, including image captioning, machine translation and multimodal machine translation, and datasets in four different languages ( 2.2).", "Based on a number of experiments, our findings ( 3) show that BERTGEN", "(i) is surprisingly versatile as it is capable of describing images and performing translation in unimodal and multimodal settings, across all languages,", "(ii) generalises well across zero-shot image captioning, multimodal machine translation, and out-of-domain news translation tasks, and finally", "(iii) is parameter efficient when compared to state-of-the-art models for each of the tasks combined together.", "In this section, we describe BERTGEN and the tasks we explore.", "We then detail the baselines and SoTA systems that we compare against.", "This section details the main aspects of BERTGEN that distinguish it from the existing work on vision & language pre-training.", "Initialisation.", "We take advantage of the previous successes 
"This involves using the VL-BERT (Su et al., 2020) checkpoint and initialising the word embeddings, the Transformer weights and the MLM head with M-BERT (Devlin et al., 2019).", "We conjecture that this primes BERTGEN to be aware of the visual modality and of multiple languages.", "This is simply due to VL-BERT being pre-trained on English monolingual and image captioning corpora, as well as M-BERT offering a 119K WordPiece vocabulary trained on the entire Wikipedia in 104 languages.", "Input configuration.", "While BERTGEN is potentially capable of modeling a variety of generative tasks, we focus on three particular tasks, namely machine translation (MT), multimodal MT (MMT) and image captioning (IC).", "Therefore, depending on the task, the input configuration of the model may change during both training and testing.", "To clarify further, let us first denote a sequence of embeddings representing a source sentence by x^(i) = [x^(i)_1, ..., x^(i)_m], its target translation by y^(i) = [y^(i)_1, ..., y^(i)_n], and a collection of k regional visual features extracted from an associated image by v^(i) = [v^(i)_1, ..., v^(i)_k].", "Figure 1 depicts BERTGEN when processing a sample from the MMT task.", "This task's input configuration is a triplet that involves all three sequences, i.e. {x^(i), y^(i), v^(i)}.", "Using this notation, the MT and IC tasks' configurations would correspond to {x^(i), y^(i)} and {v^(i), y^(i)}, respectively.", "Visual embeddings.", "We follow VL-BERT and represent images as a collection of k features v^(i) defined over regions of interest (RoI).", "After pre-extracting the 2048-dimensional RoI features using the bottom-up-top-down object detector (Anderson et al., 2018), we keep between 10 and 100 of them (i.e. k ∈ [10, 100]) depending on the confidence score.", "Figure 3: A look at BERTGEN's self-attention: the connections denote that self-attentive representations are re-computed in every step; the generation ends when [STOP] is predicted; the smileys refer to RoI features.",
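As a concrete illustration of the visual embeddings just described, here is a minimal sketch that sums a projected RoI feature with a projected box-geometry embedding; the hidden size of 768 matches the model dimension stated later, but the 4-value box encoding and the layer shapes are our assumptions rather than VL-BERT's exact scheme.

```python
# Sketch of an RoI visual embedding: 2048-d detector feature plus a
# projection of the bounding-box geometry. Dimensions are illustrative.
import torch
import torch.nn as nn

class RoIEmbedding(nn.Module):
    def __init__(self, feat_dim=2048, geo_dim=4, hidden=768):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hidden)  # RoI appearance feature
        self.geo_proj = nn.Linear(geo_dim, hidden)    # box-coordinate embedding

    def forward(self, roi_feats, boxes):
        # roi_feats: (k, 2048), boxes: (k, 4) normalized (x1, y1, x2, y2)
        return self.feat_proj(roi_feats) + self.geo_proj(boxes)

emb = RoIEmbedding()
v = emb(torch.randn(36, 2048), torch.rand(36, 4))  # 36 regions -> (36, 768)
```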
"The final visual embedding for an RoI is obtained by summing its feature vector and its geometric embedding (i.e. the projection of the bounding box coordinates).", "When encoding the non-visual positions, the same RoI feature vector for the full image is repeated (see Figure 1).", "We note that we do not fine-tune the object detector during training.", "Sequence unrolling.", "An important aspect of BERTGEN is that it does not explicitly distinguish between the encoder and decoder blocks usually seen in sequence-to-sequence models.", "This is accomplished by formalising both encoding and generation using the MLM framework.", "Formally, let us consider the MMT task and define the maximum log-likelihood objective for a given triplet {x^(i), v^(i), y^(i)}, where the target y^(i) has n tokens: L^(i) = Σ_{t=1}^{n} log P(y^(i)_t | x^(i); v^(i); y^(i)_{<t}) (1).", "In a typical sequence-to-sequence model, each log-probability term would be computed by a decoder within the forward pass of the same training example.", "In contrast, BERTGEN explicitly unrolls the example n times, forming n new training examples.", "In other words, each conditional term in Equation 1 is observed independently within an epoch of training.", "Therefore, sequence unrolling has a data augmentation effect, since a training corpus with D examples is approximately augmented by a factor of the average length of the target sequences.", "Moreover, the unified encoder-decoder formalism halves the number of parameters, making BERTGEN parameter efficient.", "Self attention.", "Given that a single Transformer (Vaswani et al., 2017) performs both encoding and decoding, sequence unrolling affects self-attention as well (Figure 3).", "First, all positions attend to each other for a given unrolled example, i.e. the attention is bi-directional.", "Second, since each unrolled case is an independent example, the self-attentive representations of early positions are naturally re-computed, in contrast to typical Transformer decoders.", "Finally, due to how inputs/outputs are represented in a single stream and encoded through shared self-attention, BERTGEN enforces an inductive bias towards a truly multimodal and multilingual representation space.", "Target language specifiers.", "Finally, to select the language during generation, input sequences begin with special target language specifiers (Ha et al., 2016; Johnson et al., 2017) (Figure 1).", "The specifier is task-agnostic, i.e. the same specifier [DE] is used both when captioning into German and when translating into German.", "Training & hyper-parameters.", "We extend the base configuration of VL-BERT, which is a Transformer with 12 self-attention layers and 12 heads.", "The model and feed-forward dimensions are 768 and 3072, respectively.", "On a single 32GB V100 GPU, one epoch (§3) takes approximately two days to complete, as we could only fit one example per task (i.e. a batch size of 13) into memory.", "We use the AdamW optimiser (Loshchilov and Hutter, 2019) with the base learning rate set to 1.3 × 10^-5.", "The learning rate is warmed up in the first 16K steps and then decays linearly.", "We set the weight decay to 10^-4.", "During training, we let the model update the positional embeddings, as BERTGEN needs to learn new positions not covered by VL-BERT pre-training.", "The final model has 89.3M parameters excluding the word embeddings.",
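Below is a minimal sketch of sequence unrolling (Equation 1): one target of n tokens becomes n independent MLM-style examples, each ending in a [MASK] to predict. The token layout and the extra example that teaches the model to emit [STOP] are assumptions consistent with the text and Figure 1, not BERTGEN's exact serialization.

```python
# Sketch of sequence unrolling (Eq. 1): a single (source, target) pair is
# expanded into one training example per target token. Special-token
# layout is illustrative, not BERTGEN's exact vocabulary.
def unroll(source_tokens, target_tokens, lang="[DE]"):
    examples = []
    for t in range(len(target_tokens)):
        inputs = [lang] + source_tokens + ["[SEP]"] + target_tokens[:t] + ["[MASK]"]
        examples.append((inputs, target_tokens[t]))  # (model input, label)
    # One extra example teaches the model when to stop generating.
    inputs = [lang] + source_tokens + ["[SEP]"] + target_tokens + ["[MASK]"]
    examples.append((inputs, "[STOP]"))
    return examples

for inp, label in unroll("a red bike".split(), "ein rotes fahrrad".split()):
    print(inp, "->", label)
```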
"Decoding.", "At test time, we incrementally add the most likely prediction (i.e. greedy search) into the previously masked position (Figure 1) and shift the [MASK] token right by one.", "We chose greedy search over beam search because the latter would make decoding much slower, due to self-attentive representations being re-computed.", "The decoding ends when [STOP] is predicted.", "To evaluate BERTGEN's generative abilities, we explore a diverse set of tasks: image captioning, text-only MT and multimodal MT.", "Table 1 summarises the training statistics for the various datasets we use.", "Footnote 2: https://github.com/ImperialNLP/BertGen", "Footnote 3: With careful optimisation of the training code and mixed-precision multi-GPU training, the training time can be substantially reduced.", "Image captioning (IC) involves describing images in a specified natural language.", "We train BERTGEN for English, German and Turkish captioning tasks.", "Specifically, we use the FLICKR30K dataset (Young et al., 2014), which provides 29K training images, each with five English captions collected through crowd-sourcing.", "The validation and test sets contain approximately 1K images each.", "We use the MULTI30K dataset (Elliott et al., 2016), which annotates FLICKR30K images with five German captions.", "Finally, we use the TASVIRET dataset (Unal et al., 2016), which provides two Turkish captions for each of the 8,092 images in the FLICKR8K dataset (Rashtchian et al., 2010).", "Since FLICKR8K is a subset of FLICKR30K, we create a new split of TASVIRET to avoid data leakage between training and test splits.", "The resulting training, validation and test splits contain 6914, 543, and 543 images, respectively.", "To evaluate BERTGEN's performance on IC, we compare it against previous work with strong performance on COCO (Chen et al., 2015) and FLICKR30K.", "More precisely, ADAPTIVEATTENTION (SENTINEL) (Lu et al., 2017), which uses a sentinel token to distinguish between visual and non-visual representations, and NEURALBABYTALK (NBT), which follows a slot-filling approach through explicit object region information (Lu et al., 2018).", "Multimodal Machine Translation (MMT) attempts to improve MT quality by incorporating information from modalities other than language (Sulubacak et al., 2020).", "In our case, we train BERTGEN for EN↔DE and EN↔FR MMT tasks and use the MULTI30K dataset, the main dataset for image-informed translation, which provides caption translations of FLICKR30K images into German and French.", "To evaluate BERTGEN on MMT tasks, we use the original 2016 test set, which contains 1,000 examples.", "For a comprehensive comparison with previous work, we train a SoTA recurrent MMT (Caglayan et al., 2020) solely on the MULTI30K dataset, which applies a secondary (visual) attention in the decoder over the RoI features, i.e. the same features that are also used by BERTGEN (§2.1).",
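Returning to the Decoding paragraph above, here is a minimal sketch of the incremental greedy search: fill the [MASK], shift it right by one, and re-run the model at each step, which is exactly why the self-attentive representations are re-computed. The predict_masked helper and the special tokens are assumptions, not part of the released code.

```python
# Sketch of BERTGEN-style greedy decoding: predict the [MASK] position,
# write the prediction back, and shift [MASK] right by one until [STOP].
# predict_masked(tokens) is an assumed helper returning the most likely
# token for the single [MASK] in `tokens` (a full forward pass per step).
def greedy_decode(source_tokens, predict_masked, lang="[DE]", max_len=50):
    prefix = [lang] + source_tokens + ["[SEP]"]
    generated = []
    for _ in range(max_len):
        token = predict_masked(prefix + generated + ["[MASK]"])
        if token == "[STOP]":
            break
        generated.append(token)  # fill the mask, then shift it right
    return generated
```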
"There are two GRU (Cho et al., 2014) layers in both the encoder and the decoder, and the embedding & hidden dimensions in the model are set to 200 and 320, respectively.", "Each model has 5.6M parameters excluding the word embeddings.", "Besides the state-of-the-art constrained recurrent MMT model described above, we further compare BERTGEN, which is trained on various other MT and IC corpora, to an unconstrained Transformer-based MMT trained on 9M additional EN→DE sentences (Libovicky, 2019) in addition to MULTI30K.", "Footnote 4: We obtained the test set outputs from the author and pre-processed them with the M-BERT tokeniser to ensure comparability.", "We incorporate six text-only MT tasks into our training protocol.", "We use EN↔DE and EN↔FR MT datasets from IWSLT'14 (Cettolo et al., 2012), which consists of TED Talks' subtitles and their translations.", "We use the prepare-iwslt14 recipe from FAIRSEQ (Ott et al., 2019) to prepare the dev and test sets.", "This yields an EN↔DE test set of 6,750 sentences, which consists of dev2010, dev2012.TEDX, tst2010, tst2011 and tst2012.", "Similarly, the EN↔FR test set consists of dev2010, tst2010, tst2011 and tst2012, which amounts to 4,493 sentences.", "For the EN↔TR directions, we use the SETIMES2 (Tiedemann, 2012) news dataset for training.", "For development and test sets, we use the official WMT test sets (Bojar et al., 2018), namely newstest2016 and newstest2017 as the development set (6,007 sentences), and newstest2018 (6,000 sentences) as the test set.", "Both the IWSLT and SETIMES2 corpora are medium-scale resources often used in the MT research community, and have much harder test sets than the MMT and IC tasks, due to a significant domain shift.", "Finally, for each translation direction, we train a Transformer NMT model (Vaswani et al., 2017) using the IWSLT-DE-EN recipe of the FAIRSEQ toolkit (Ott et al., 2019).", "This recipe has six encoders and six decoders, each equipped with 4-head self-attention layers.", "The model and feed-forward dimensions are set to 512 and 1024, respectively.", "Each model has 31.5M parameters excluding the word embeddings.", "Since BERTGEN is a general-purpose multilingual and multimodal generator, we expect it to perform in the same ballpark as these strong NMT baselines, but not necessarily be SoTA compared to novel & sophisticated NMT models, which also make use of a lot more training data.", "We train BERTGEN on lowercased sentences for 45 epochs, after which the overall performance on the tasks reached a plateau.", "We define one BERTGEN epoch as a single pass over all of the training data for the MULTI30K EN→DE MMT task and denote this task as the reference task.", "We use greedy search for all systems that we trained and merge back the word pieces before evaluation.", "We compute tokenised BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014) and CIDEr (Vedantam et al., 2015) using coco-caption.", "In what follows, we provide detailed quantitative and qualitative findings.", "Table 2 provides an overview of BERTGEN's image captioning performance on different test sets and languages.", "First of all, on English FLICKR30K, BERTGEN is clearly able to outperform the strong captioning models (§2.2.1) SENTINEL (Lu et al., 2017) and NBT (Lu et al., 2018), even though they use beam search for decoding.", "On COCO (Chen et al., 2015), an image captioning corpus much larger and more diverse than FLICKR30K, we evaluate BERTGEN on Karpathy's test split (Karpathy and Fei-Fei, 2015) and notice that the scores are reasonable given that BERTGEN is not trained on COCO: our model lags behind NBT (w/ beam search) by 6.7 METEOR.",
"Footnote 5: Since M-BERT is aggressive at splitting apostrophes and hyphens, our results may slightly differ from other work.", "For zero-shot French captioning (F30K-FR), we resort to the reference MMT translations from the MULTI30K EN→FR task, as there are no human references for French.", "Although this is problematic, as the metrics will penalise captions that are not translations of the English captions, we provide the scores to show that the zero-shot outputs are valid descriptions.", "We note that the low range of scores reported here is also due to having one reference caption instead of five references, as in FLICKR30K.", "Footnote 7: As a reference, evaluating English captions using one reference at a time yields 7.9 BLEU on average, compared to 27.0 BLEU in Table 2.", "Finally, we report results for our custom Turkish split (§2.2.1) (F30K-TR) and for German (F30K-DE).", "Even though there are no comparable results in the literature for these three tasks, we demonstrate through qualitative examples that BERTGEN produces sensible outputs.", "Qualitative examples.", "We now focus on a few examples to examine the multilingual image captioning ability of BERTGEN in action (Table 3).", "For the first image, all captions are almost the same, as the image has few salient points.", "For the second image, however, we observe much more variation across captions, in line with the complexity of the scene.", "We are particularly surprised by the zero-shot French captioning performance, a task that BERTGEN is not trained for at all.", "Upon manual inspection, we noticed that the captions are often short, objective gists of the images.", "These observations also hold for the captions generated for the COCO test set, as we can see in the third example.", "Table 3 examples: EN: a man in a red shirt and helmet is riding a motorbike on a dirt road.", "DE: ein mann fährt mit einem motorrad auf einem weg an einem fluss entlang. (a man rides a motorcycle on a path along a river.)", "TR: çamurlu bir yolda motoruyla ilerlemekte olan kırmızı üstlü bir adam ve arkasındaki dağ manzarası. (a man in a red top riding his bike down a muddy road, with a mountain landscape behind him.)", "FR: un homme avec un casque fait du motocross. (a man with a helmet rides motocross.)", "A set of additional examples in the Appendix shows that BERTGEN does not simply retrieve caption translations learned from the EN→FR task.", "Overall, both the quantitative and the qualitative results provide evidence of the utility of multimodal and multilingual initialisation, as well as of the efficacy of knowledge transfer across different tasks for image captioning.", "Table 4 summarises BERTGEN's performance on MMT.", "First of all, BERTGEN consistently outperforms the Transformer-based FAIRSEQ NMT models and the recurrent MMT (Caglayan et al., 2020) models on both the EN-DE and the EN-FR language pairs.", "Furthermore, BERTGEN is also substantially better than a state-of-the-art unconstrained MMT (Libovicky, 2019) model trained on a 6x larger parallel corpus.", "Adversarial evaluation.", "Following Elliott (2018), we probe BERTGEN's ability to integrate multiple modalities effectively.", "Specifically, we decode translations by shuffling the {image, source caption} mappings so that the images do not correspond to the sentences to be translated.", "The EN→DE results showed that the incongruence leads to 1.1 and 0.9 point drops in BLEU and METEOR, respectively.",
"For EN→FR, the drops are much more prominent, with 3.1 and 2.3 points again for BLEU and METEOR.", "This indicates that the features are not ignored at all, unlike in Caglayan et al. (2019), where it was shown that sequence-to-sequence MMT models can learn to ignore the images when the linguistic signal is sufficient to perform the task.", "The zero-shot results in Table 4 show the surprising ability of BERTGEN to perform MMT on directions unseen during training.", "Moreover, the zero-shot performance surpasses strong MMT and NMT systems by up to 2 and 3.3 METEOR for DE→FR and FR→DE, respectively.", "Similar to the image captioning results, this demonstrates the potential of BERTGEN to generalise over a variety of language pairs and tasks.", "First, we compare BERTGEN's performance to each task-specific FAIRSEQ system.", "According to Table 5, we observe that the translation quality of BERTGEN is generally superior to that of the strong FAIRSEQ systems, especially in METEOR, where BERTGEN leads in all pairs.", "Second, we look at learning efficiency by comparing the training curves of BERTGEN and each task-specific FAIRSEQ system (Figure 4).", "Here, the x axis represents how many times the specific task's training set has been seen by the models.", "(Table 6, Zero-shot BERTGEN performance on the WMT'19 test set, BLEU/METEOR. DE→FR: BERTGEN 19.6/40.5, TARTU 39.5/59.0, MSRA 46.5/64.2. FR→DE: BERTGEN 13.1/36.7, TARTU 26.3/47.3, MSRA 38.2/56.4. The TARTU and MSRA systems are not zero-shot, as they are trained on DE-FR corpora.)", "BERTGEN is trained for 45 reference epochs (§3), and this corresponds to only a few complete passes over the training sets of the NMT tasks (for example, only 3 passes over SETIMES EN-TR).", "This is in contrast to the single-task systems, which usually require a large number of epochs for convergence.", "We notice a general trend and observe that BERTGEN tends to outperform single-task systems usually after only a few passes over the corresponding training set.", "Many factors could be contributing to this observation, such as sequence unrolling, multi-tasking, the shared input space, or relevant inductive biases transferred from M-BERT.", "We partly address these in the ablation studies (§3.4) and leave further investigation to future work.", "Zero-shot performance.", "We use the DE→FR test set from the WMT'19 shared task on news translation (Barrault et al., 2019) to assess the zero-shot translation capability of BERTGEN.", "This test set includes 1,701 sentences from news data regarding the European Elections.", "We compare our results to two shared task systems, namely TARTU (the shared task baseline) and MSRA (the best performing, state-of-the-art system) (Barrault et al., 2019), after re-tokenising their outputs with M-BERT.", "Although BERTGEN is expected to obtain lower scores than the dedicated WMT systems due to the domain mismatch of the test set, we consider both the quantitative (Table 6) and the qualitative (Table 7) results extremely encouraging.", "(Table 7 example output, BERTGEN: 'la décision est tombée au 70ème anniversaire de ma femme.')", "We train single-task MMT systems on the Multi30k EN→DE language pair.", "Specifically, we begin with a baseline system which is initialised with random weights.", "We then train a second baseline where only the visual processing layers are transferred from VL-BERT.", "Finally, we train a third baseline that is initialised similarly to BERTGEN, i.e. using the hybrid initialisation (§2.1).",
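A minimal sketch of the three initialisation schemes compared in this ablation; the parameter-name prefixes are hypothetical, since the real BERTGEN module names are not given here:

```python
import torch

def initialise(model, scheme, mbert_sd=None, vlbert_sd=None):
    """scheme: 'random' (keep random weights), 'visual' (copy only the
    visual processing layers from VL-BERT), or 'hybrid' (textual weights
    from M-BERT plus visual weights from VL-BERT, as in BERTGEN)."""
    if scheme == "random":
        return model
    state = model.state_dict()
    for name in state:
        if name.startswith("visual") and vlbert_sd and name in vlbert_sd:
            state[name] = vlbert_sd[name]   # visual layers from VL-BERT
        elif scheme == "hybrid" and mbert_sd and name in mbert_sd:
            state[name] = mbert_sd[name]    # textual layers from M-BERT
    model.load_state_dict(state)
    return model
```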
"Figure 5 compares the validation BLEU scores of these three systems.", "We observe that the benefits of knowledge transfer from pre-trained models are incrementally positive; however, BERTGEN's hybrid initialisation outperforms the other two ablations.", "We now remove the multi-tasking aspect from BERTGEN to investigate the extent to which the performance improvements are related to the other tasks.", "Similar to §3.4.1, we focus on the Multi30k EN→DE MMT task and train a single-task, hybrid-initialised BERTGEN.", "Figure 6 compares the validation BLEU scores obtained by the default BERTGEN and the single-task variant.", "We observe that BERTGEN benefits from multi-task training and, more importantly, does not seem to exhibit patterns of catastrophic forgetting (French, 1999).", "(Figure 6, Validation scores on Multi30k EN→DE MMT for the multi-tasking ablation, plotting BLEU against passes over the EN-DE MMT data: the default multi-task BERTGEN outperforms the single-task one.)", "Based on these observations, we expect similar model behavior to hold for the other tasks.", "Research in NLP and related fields has been increasingly focusing on transfer learning approaches where a model is first pre-trained on a data-rich task and then transferred to downstream tasks (McCann et al., 2017; Peters et al., 2018; Devlin et al., 2019).", "This framework presumably allows the model to capture useful inductive biases that generalise to a variety of NLP tasks, often after performing task-specific fine-tuning (Raffel et al., 2020).", "Of these, the most relevant studies to our work are BERT (Devlin et al., 2019) and its multilingual version M-BERT, which pre-train a Transformer (Vaswani et al., 2017) on large monolingual corpora using the masked language modelling (MLM) objective.", "Recent research has also attempted to combine linguistic inputs with other modalities, such as vision and speech, to achieve a grounded understanding of meaning.", "Successful approaches, including LXMERT (Tan and Bansal, 2019), VL-BERT (Su et al., 2020) and others (Lu et al., 2019; Li et al., 2020a,b), achieve this by combining BERT's MLM objective with auxiliary tasks such as masked region classification and image-sentence matching, and pre-train their models on large-scale image captioning corpora (Chen et al., 2015; Sharma et al., 2018).", "Similarly, SpeechBERT extends BERT by jointly training on speech and text data (Chuang et al., 2020).", "Although SoTA results are reported by these approaches, they focus on unimodal and multimodal natural language understanding (NLU) tasks, with a strong emphasis on English.", "The backbone of BERTGEN combines VL-BERT (Su et al., 2020) with M-BERT (Devlin et al., 2019) to realise a multilingual and multimodal generator that can be used for a diverse set of generative tasks and languages rather than NLU tasks.", "Previous work has studied how to benefit from pre-trained BERT models in generative tasks such as NMT (Imamura and Sumita, 2019; Clinchant et al., 2019; Zhu et al., 2020).", "BERTGEN differs from these as it is not fine-tuned for a particular MT corpus, and it exhibits multilingual and multimodal properties for general-purpose generation.", "Another related branch of work explores pre-training strategies specific to sequence-to-sequence tasks.", "This includes MASS (Song et al., 2019), which exploits an encoder-decoder framework with the MLM objective for task-specific generative pre-training, and UniLM (Dong et al., 2019), which introduces uni-directional, bi-directional and sequence-to-sequence LM objectives by carefully adjusting the self-attention masks during training.",
"Zhou et al. (2020) extend UniLM to vision & language pre-training, using Conceptual Captions (Sharma et al., 2018) as the pre-training dataset.", "However, these models require a further fine-tuning step for generative tasks, unlike BERTGEN, which is trained only once.", "Several approaches exist for multi-task learning & generation (Dong et al., 2015; Luong et al., 2016) in NLP, especially in multilingual NMT, where tasks denote different language pairs (Zoph and Knight, 2016; Firat et al., 2016).", "The multi-task (and zero-shot) generation ability of BERTGEN is mostly inspired by Ha et al. (2016) and Johnson et al. (2017).", "Both of these introduced target-language specifiers to select the output language when decoding translations from their model.", "Our multilingual & multimodal take on multi-task generation is most similar to Kaiser et al. (2017), where a single Transformer model is trained on different tasks including image captioning, object classification, machine translation, speech recognition and parsing.", "However, their architecture depends on particular structures such as encoders, decoders, modality-specific networks and I/O mixers, unlike BERTGEN, which does not require task-specific modules.", "In this paper, we presented BERTGEN, a novel generative, decoder-only model which extends BERT by combining multimodal and multilingual pre-trained models.", "Our findings show that BERTGEN obtains strong performance on a variety of generative tasks and further generalises over unseen tasks.", "Importantly, our model demonstrates the potential for general-purpose (instead of task-specific) generation that goes above and beyond the traditional pre-training and fine-tuning practices.", "BERTGEN is also parameter-efficient: it has 89.3M total parameters and is trained on thirteen tasks encompassing MT, multimodal MT and image captioning.", "On the other hand, each of the single-task FAIRSEQ NMT baselines has 31.5M parameters.", "Our ablation studies show that BERTGEN is able to efficiently transfer relevant inductive biases from the pre-trained models and benefits from multi-task learning without suffering from catastrophic forgetting.", "We hope that these findings will motivate future research in exploiting more sophisticated pre-trained models in place of M-BERT, VL-BERT and others.", "This paper is a follow-up work to the MSc. thesis of Faidon Mitzalis, co-supervised by Prof. Lucia Specia and Dr. Ozan Caglayan.", "Lucia Specia, Pranava Madhyastha and Ozan Caglayan received support from the MultiMT project (H2020 ERC Starting Grant No. 678017).", "Lucia Specia also received support from the Air Force Office of Scientific Research (under award number FA8655-20-1-7006)." ]
[ "objective", "abstain", "objective", "result", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "objective", "result", "objective", "abstain", "abstain", "result", "objective", "other", "other", "other", "other" ]
[ "Shoval Sadde", "Bar Ilan University [email protected]", "Hebrew University of Jerusalem [email protected]", "Georgetown University [email protected]", "Abstract Modality is the linguistic ability to describe events with added information such as how desirable, plausible , or feasible they are.", "Modality is important for many NLP downstream tasks such as the detection of hedging, uncertainty, speculation, and more.", "Previous studies that address modality detection in NLP often restrict modal expressions to a closed syntactic class , and the modal sense labels are vastly different across different studies, lacking an accepted standard.", "Furthermore, these senses are often analyzed independently of the events that they modify.", "This work builds on the theoretical foundations of the Georgetown Gradable Modal Expressions (GME) work by Rubinstein et al. (2013) to propose an event-based modality detection task where modal expressions can be words of any syntactic class and sense labels are drawn from a comprehensive taxonomy which harmonizes the modal concepts contributed by the different studies.", "We present experiments on the GME corpus aiming to detect and classify fine-grained modal concepts and associate them with their modified events.", "We show that detecting and classifying modal expressions is not only feasible, but also improves the detection of modal events in their own right.", "Modality refers to the linguistic ability to describe alternative ways the world could be.", "1 Modal expressions aim to identify wishes, rules, beliefs, or norms in texts (Kratzer, 1981; Portner, 2009), which is a crucial part of Natural Language Understanding (NLU) (Morante and Sporleder, 2012).", "Concretely, events in natural language are often reported in a manner that emphasizes non-actual perspectives on them, rather than their actual propositional content.", "Consider examples (1a)(1b): Equal contribution 1 In formal semantics, these alternatives are referred to as possible worlds or situations (Kripke, 1959; Lewis, 1973; Barwise and Perry, 1981; Kratzer, 2010).", "The propositional content p = present a paper at ACL'X can be easily verified for sentences (1a)-(1b) by looking up the proceedings of the conference to (dis)prove the existence of the relevant publication.", "The same proposition p is still referred to in sentences (2a)(2d), but now in each one, p is described from a different perspective: (2)", "a. We aim to present a paper at ACL'21.", "b. We want to present a paper at ACL'21.", "c. We ought to present a paper at ACL'21.", "d. 
"These sentences cannot be verified or falsified simply by examining whether p actually came or will come to pass, and in fact, such verification is not the goal of this way of reporting.", "Rather, speakers describe such events in order to indicate PLANS (2a), DESIRES (2b), NORMS (2c), or the assessed PLAUSIBILITY (2d) of the associated propositional content p.", "Investigating how to classify these perspectives on events has been the focus of extensive research on modality in theoretical linguistics (Kratzer, 1981; Palmer, 1986; Portner, 2009).", "In terms of NLP technology, modal concepts as expressed in (2) are relevant to many downstream tasks, such as the automatic detection of hedging and speculation (Vincze et al., 2008; Malhotra et al., 2013), uncertainty (Vincze et al., 2008; Miwa et al., 2012; Zerva et al., 2017; Prieto et al., 2020), opinion (Wiebe et al., 2005; Rubin, 2010; Miwa et al., 2012), and factuality (Saurí and Pustejovsky, 2009; Rudinger et al., 2018).", "Although these tasks rely on modality features, so far there is no accepted standard for modal concepts and labels which aligns with the semantic space of modal senses that linguists identify.", "Consequently, modality features are either treated idiosyncratically or are absent from semantic frameworks (Donatelli et al., 2018, §4.6).", "In support of such downstream tasks, a different type of NLP investigation targets modality annotation and detection in its own right (Ruppenhofer and Rehbein (2012); Baker et al. (2012); Zhou et al. (2015); Marasović and Frank (2016); Hendrickx et al. (2012); Nissim et al. (2013); Ghia et al. (2016); Mendes et al. (2016); Lavid et al. (2016), and others).", "However, each of these studies creates its own scheme, and none of these schemes has been picked up as an accepted standard by the community.", "Moreover, different endeavors suffer from one (or more) of the following types of deficiencies with respect to their expressivity and coverage.", "First, many studies limit the modal triggers, i.e., the expressions that trigger the modal meaning, to a closed class of auxiliary verbs (e.g., can, might, should, must in English (Ruppenhofer and Rehbein, 2012; Marasović et al., 2016; Quaresma et al., 2014)).", "However, as acknowledged by linguists (Kratzer, 1981) and NLP researchers (Rubin, 2010; Baker et al., 2012; Nissim et al., 2013), words of any part of speech (POS) can trigger modality.", "Consider, for instance, the following triggers: We should remain calm (AUX); We have a plan to reduce the costs (NOUN); Our agency prefers this equipment (VERB); Marx is probably patriotic (ADV); Devaluation has been necessary (ADJ).", "Second, the modal senses, i.e., the labels that indicate the modal perspectives, differ from one study to another, with no accepted standard.", "Some studies focus only on a particular sense, such as epistemic modality (Rubin, 2010; Ghia et al., 2016).", "Others use labels that mix modal senses with orthogonal notions (e.g., force, distinguishing permission from requirement, as in Baker et al. (2012)), thereby making their deployment into existing annotations and tasks less transparent.",
"In general, there is no single annotation standard that covers the full spectrum of modal senses attested in the data and confirmed by the latest linguistic theories, as portrayed by Portner (2009).", "Finally, modality detection in NLP has often been cast as a word-sense disambiguation (WSD) task (Ruppenhofer and Rehbein, 2012) or as a sentence-classification task (Marasović and Frank, 2016).", "Both perspectives are insufficient for any practical use.", "The latter is too coarse-grained, as a sentence may contain multiple events, each of which potentially carries a different modal sense.", "The former is uninformative, because the modal trigger is not explicitly associated with the event being modified.", "Ghia et al. (2016) take a step in the right direction, offering to annotate modal sense constructions.", "The current work proposes to address all of the aforementioned deficiencies as follows.", "We define a prediction task that we term event-based modality detection, where, given a sentence as input, we aim to return all of its modal triggers, their associated modal senses, and, for each trigger, the respective event being modified.", "Crucially, the modal triggers can be of any syntactic class.", "The modal senses are drawn from a single taxonomy that we motivate based on linguistic research and which harmonizes the different modal concepts contributed in previous studies (§3).", "Finally, we propose to view modal triggers as semantic modifiers of eventive heads in event-based (a.k.a. Neo-Davidsonian; Parsons (1990)) semantics.", "This is motivated by practical concerns: when extracting events from texts to benefit downstream tasks, one would want easy access to the features that indicate the perspective on each event, above and beyond its participants.", "The accompanying annotation standard we assume for the task is based on the Georgetown Gradable Modal Expressions (GME) framework (Rubinstein et al., 2013), with two simplifications that are designed to allow for more consistent annotations and increased ease of use by non-experts.", "First, we change the modal sense labels to be intuitive and self-explanatory.", "Second, instead of the event span (a.k.a. the prejacent) in the GME, we mark the head of the event being modified.", "To assess the feasibility of the proposed task, we use the GME corpus (Rubinstein et al., 2013) to train and test the automatic detection of modal triggers, their senses, and associated events.", "Our experiments show that while identifying a closed set of auxiliary verbs as modal triggers is straightforward, expanding the set of triggers to any syntactic class indeed makes the task harder.", "Notwithstanding this difficulty, we show that a model based on large pre-trained contextualized embeddings (Liu et al., 2019) obtains substantial improvements over our baseline on the full task.", "Moreover, we show that detecting modalized events in fact improves with the availability of information about the modal triggers.", "All in all, we contribute a new task, a new standard, and a set of strong baselines for the event-based modality task we define.", "Modal expressions allow language users to discuss alternative realities.", "For example, the sentence She can reach the ceiling is modal because it describes the event of her reaching the ceiling as feasible, but potentially non-actual.", "Similarly, She hopefully will reach the ceiling is modal because it describes such an event as desirable, and likewise potentially non-actual.",
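Before turning to the sense inventory, a minimal sketch of what one event-based annotation (trigger, sense, and modified event head, as defined above) could look like as a data structure; this container and its field names are our own illustration, not the GME release format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModalAnnotation:
    trigger: List[int]  # token indices of the (possibly multi-word) modal trigger
    sense: str          # a label from the taxonomy, e.g. "priority.rules_norms"
    event_head: int     # token index of the head of the modified event

# "We should remain calm" ->
# ModalAnnotation(trigger=[1], sense="priority.rules_norms", event_head=2)
```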
"A sentence like She was reported to reach the ceiling describes the event of her reaching the ceiling as potentially actual, according to one's state of knowledge, yet implying that in reality it could have been otherwise.", "Over the last 40 years, linguists have achieved an increasingly refined understanding of how to classify modal senses.", "The most traditional and fundamental distinction is between epistemic modals and non-epistemic modals (also called root modals).", "Epistemic modals have to do with knowledge and the plausibility of the event actually happening.", "Non-epistemic modals have to do with agent actions and motivations underlying the events.", "(Footnote: The same split is motivated also on syntactic grounds: epistemic modals appear in high positions in the syntactic structure, in particular above tense and aspect, while root modals appear lower in the structure, closer to the verb phrase; see Hacquard (2010) for an overview.)", "Epistemic modality is not a unified class.", "Some modals express a perspective on the event that is based on knowledge, while others express a perspective related to the objective chance of the event happening (a.k.a. circumstantial modality in Kratzer (1981)).", "Furthermore, linguists posit two types of non-epistemic modal senses: one which focuses on the objective abilities and dynamic unfolding of events (Palmer, 1986), and another which focuses on subjective reasons to prioritise one event over another (Portner, 2009).", "Within the latter subtype there are further subdivisions according to whether the event is prioritised in terms of norms (deontic), desires/preferences (bouletic), or goals/plans (teleological) (Kratzer, 1981; Portner, 2009; Rubinstein, 2012; Matthewson and Truckenbrodt, 2018).", "The traditional three-way classification of modal senses into deontic, epistemic, and dynamic, which has been used in previous NLP work (e.g., Ruppenhofer and Rehbein (2012); Marasović et al. (2016)), did not attend to these subdivisions, which are nonetheless expected to be important for reasoning and other tasks that require deep understanding.", "Baker et al. (2012) make finer-grained distinctions in the non-epistemic case, distinguishing between requirements, permissions, wants, and intentions, but not all of these in fact track distinct modal senses.", "For example, their require modality conflates both rule-based obligations and goal-oriented preferences.", "Most importantly, the discussion of modality in NLP often resorts to linguistic regimes that are not understandable by non-linguists and non-expert practitioners, making the output of these systems essentially unusable for NLP engineers and designers of downstream tasks.", "This paper aims to bridge this gap, offering a single task and annotation standard that cover the rich space of concepts while being intuitively understandable and easy to use.", "A Note on Modality vs. Factuality.", "A related but different line of work in NLP investigates the automatic identification and classification of the factual status of events (Saurí and Pustejovsky, 2009; Rudinger et al., 2018).", "That is, the factuality classification task has to do with automatically detecting whether, in actuality, a reported event has happened or has not happened.", "It is important to note that factuality and modality are distinct and completely orthogonal notions (see, e.g., Ghia et al. 2016).",
2016).", "For example, the sentences The WSJ announced that she reached the shore and She was able to reach the shore share the propositional content of p = she reached the shore ' and its implied factuality status (happened), but differ in the manner of reporting the event p .", "The former is based on knowledge, while the latter puts emphasis on the ability of the agent in p .", "It is precisely this change of perspective that is missing in the realm of NLU and related downstream tasks.", "The upshot of Rudinger et", "al.'s (2018) work is the claim that factuality is determined at event level, and that expressions contributing to factuality may be of any syntactic class.", "We likewise propose to relate modal triggers to an event being modified, and we similarly adopt an inclusive view of the syntactic classes that express modality.", "In contrast to event-based factuality detection, as proposed by Rudinger et al. (2018) and others, which classifies which events came to pass, event-based modality detection as proposed here classifies an orthogonal dimension of meaning related to semantic properties of events that may be non-actual, providing information about why they are portrayed as such.", "We propose an event-based modality detection task that rests upon three assumptions:", "(i) the set of possible modal triggers is open-ended, and may be of any POS tag,", "(ii) the associated modal senses are fine-grained and form an hierarchical taxonomy, and", "(iii) each trigger is associated with an event .", "In these examples, the words in bold indicate the modal expression, which we call a trigger .", "The co-indexed items in italics mark the head of the event for which the modal perspective is ascribed.", "In (3a), reported ' triggers a modal perspective on the event of being (in custody) '.", "In (3b), believed ' triggers a modal perspective on the making ' event, and possible ' indicates a modal perspective on the seeing (the satellite) ' event.", "Clearly, the modal perspectives on these events, i.e., the modal senses, are of different types.", "How should we label these fine-grained modal senses?", "A Hierarchical Taxonomy of Modal Senses Having established that a given expression serves as a modal trigger, we are interested in classifying the particular sense, or perspective, that it assigns to the modal event.", "Figure 1 presents the complete taxonomy that we propose for modal sense classification in NLP.", "It is based on the modal senses proposed and justified by Rubinstein et al. (2013), with a few simplifications that make it intuitive and easy-to-use by NLP practitioners and non-linguists.", "4 The highest level of the hierarchy tracks the distinction between events whose PLAUSIBILITY is being assessed, and events whose PRIORITY is stated.", "More specifically, plausibility has to do with events that are expected to happen or not happen, given a relevant set of assumptions which are made explicit.", "Plausibility can be assessed based on our state of knowledge (I heard i she got married i \"), based on what is objectively probable due to facts about the world (The ice cream will definitely i melt i in the sun\"), or based on inherent (physical) abilities of an agent (I can i easily swim i 10 km\"). 4 Cf. Manning's Law, item 5 https://en. 
"(Table 1, Modal-Sense Examples. Priority: Norms and Rules, 'the ballot which must be held by the end of March'; Desires and Wishes, 'we do support certain limitations on the villains'; Plans and Goals, 'a necessity emerged to enter the Pilgrim's House'. Plausibility: State of Knowledge, 'The ship is believed to carry illegal immigrants'; State of the World, 'The disease can be contracted if a person is bitten'; State of the Agent, 'They are able to do whatever they want'.)", "In contrast, the PRIORITY branch marks a perspective where events are prioritized, or considered good, by the speaker (or, more generally, by a relevant attitude holder) (Portner, 2009).", "Events can be preferred because they are normatively obliged or commendable ('You should_i n't drink and drive_i'), because they realize a goal ('The plan_i to reduce_i costs in Q2'), or because they are otherwise desirable, as a matter of personal taste or preference ('I will preferably_i meet_i them over lunch').", "To make these notions accessible, we assign intuitive labels to these fine-grained concepts.", "On the PLAUSIBILITY side, we distinguish plausibility based on the state of KNOWLEDGE (previously, epistemic), plausibility based on a state of the WORLD (circumstantial), and plausibility based on the objective abilities of the AGENT (dynamic).", "On the PRIORITY side, we distinguish priorities based on RULES AND NORMS (deontic), priorities based on DESIRES AND WISHES (bouletic), and priorities based on PLANS AND GOALS (teleological).", "As illustrated in Table 2, modal triggers on both sides of the sense hierarchy may be of any POS tag.", "The proposed taxonomy unifies and harmonizes the different modal senses offered by previous studies.", "Importantly, we enrich the epistemic-deontic-dynamic classification used in previous NLP research (Ruppenhofer and Rehbein, 2012; Marasović and Frank, 2016) with the finer-grained notions introduced by Rubinstein et al. (2013), and we refer to the various labels in work by Baker et al. (2012) and Mendes et al. (2016).",
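The two-level taxonomy lends itself to a trivial programmatic encoding; a minimal sketch with the traditional linguistic terms kept as comments (the label spellings are our own):

```python
TAXONOMY = {
    "priority": [
        "rules_norms",       # deontic
        "desires_wishes",    # bouletic
        "plans_goals",       # teleological
    ],
    "plausibility": [
        "state_of_knowledge",  # epistemic
        "state_of_world",      # circumstantial
        "state_of_agent",      # dynamic
    ],
}

def coarse_sense(fine: str) -> str:
    """Map a fine-grained sense to its coarse-grained parent node."""
    return next(c for c, fines in TAXONOMY.items() if fine in fines)

# coarse_sense("state_of_world") -> "plausibility"
```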
(2016).", "More concretely, in GME and in our taxonomy, what in previous annotations was a monolithic deontic class (Rup-penhofer and Rehbein, 2012; Marasovic and Frank, 2016) now corresponds to the PRIORITY node, with three linguistically-motivated sub-classes (Portner, Modality Priority by Rules and Norms ( deontic ) by Desires and Wishes ( bouletic ) by Plans and Goals ( teleological ) Plausibility by State of Knowledge ( epistemic ) by State of the World ( circumstantial ) by State of the Agent ( dynamic ) Figure 1 : The Proposed Hierarchical Taxonomy of Modal Senses Priority Plausibility Aux We should remain calm there is little I can do Verb Our agency seriously needs equipment powers that enable him to defend the rights Noun a plan to reduce carbon-dioxide emissions their incapacity to put crime under control Adverb Marx is sufficiently patriotic President Mugabe easily won Zimbabwe's election Adjective devaluation was necessary this complex decision was not easy for him Table 2 : Modal Triggers with Diverse Parts-of-Speech Tags: Sentence Excerpts from the GME corpus.", "2009): a RULES-AND-NORMS class, a DESIRESAND-WISHES class, and PLANS-AND-GOALS .", "Among modal events that do not involve priorities or norms, the sub-class which concerns the state of an AGENT corresponds to dynamic modality in previous studies (Ruppenhofer and Rehbein, 2012; Marasovic et al., 2016).", "The two other subclasses of plausibility modality, state of WORLD and state of KNOWLEDGE taken together, correspond to epistemic in these previous works.", "To justify our fine-grained distinction, consider how the latter two senses, state of the WORLD and the state of KNOWLEDGE , correspond to interesting applications in the BioNLP literature, where it is vital to distinguish fact from analysis (Miwa et al., 2012).", "The difference is seen in the interpretations of may in the following examples from the BioScope corpus (Vincze et al., 2008): (4)", "a. Symptoms may include fever, cough or itches.", "b. The presence of urothelial thickening and mild dilatation of the left ureter suggest that the patient may have continued vesicoureteral reflux.", "In (4a), we classify may to the plausibility branch with a state of the WORLD sub-class.", "In Miwa et", "al.'s terms this would be referred to as fact .", "In (4b), we classify may to the plausibility branch with a state of KNOWLEDGE sub-class.", "In Miwa et", "al.'s terms this would be referred to as analysis .", "Goal We set out to assess the feasibility of our proposed event-based modality task.", "Concretely, we would like to gauge how well we can learn to detect and classify the different levels of modal senses afforded by our taxonomy (3) and to identify the events modified by the triggers.", "Data Our experiments use the Georgetown Gradable Modal Expressions Corpus (GME; Rubinstein et al. (2013)), a corpus obtained by expert annotations of the MPQA Opinion Corpus (Wiebe et al., 2005).", "The MPQA corpus is a 301,090-token corpus of news articles, which, following Ruppenhofer and Rehbein (2012), has become a benchmark for the annotation of modality.", "The GME corpus annotates various properties of modal expressions, including their sense in context, the proposition they apply to, the polarity of their environment, and whether or not they are qualified by a degree expression.", "5 Rubinstein et al. 
"We processed the corpus by extracting the modal triggers and their corresponding propositional spans (the propositional argument in GME) into a CoNLL-formatted file.", "Using spaCy (Honnibal et al., 2020), we obtained the lemmas, POS tags, and dependencies.", "The topmost head of the propositional span is considered the head of the event being modified.", "We transformed the spans of modal propositions into BIO tags, as shown in Table 3.", "We shuffled and split the data into 90% training and validation sets, and a 10% test set.", "The training and validation set was then split into 5 folds; in each fold, 20% of the sentences were randomly assigned to validation and 80% to training.", "As opposed to Marasović and Frank (2016), who trained and evaluated only on sentences already known to contain modal triggers, we use the entire dataset, including sentences with no modality.", "(Footnote: The processed data is available at https://github.com/OnlpLab/Modality-Corpus.)", "Corpus Statistics: The GME corpus, containing 11K sentences, shows that modality is a pervasive phenomenon (modal triggers were found in 96% of the documents and in 48% of the sentences).", "We find in the corpus 8,318 modal triggers, which correspond to 1,502 unique types.", "Aside from verbs, nouns (e.g., rights, possibility) and adjectives (e.g., fair, important) are among the most frequently used modal expressions, with verbs making up 37% of the modals in the corpus, adjectives 30%, and nouns 20%.", "The remaining modals are either adverbials, auxiliaries, or particles.", "While most verbal triggers are modal verbs (e.g., could, must, should; MV henceforth), 38% have other POS tags.", "736 triggers appear only once in the entire corpus with a modal meaning.", "(Footnote: Words like can and right have non-modal meanings in addition to modal meanings.)", "About 25% of modal triggers are ambiguous in terms of their modal sense (Plausibility vs. Priority), posing an additional classification challenge on top of the varied distribution of trigger POS tags.", "Modal triggers can also be multi-word expressions, with about 200 such instances in the corpus (e.g., have to).", "The modal triggers' sense labels are rather balanced: 48% of the triggers in the corpus belong to 'Plausibility' and 52% to 'Priority'.", "For the finer-grained senses, the most common and least common classes make up 33% and 7% of the corpus, respectively.",
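Looking back at the preprocessing described above, a sketch of the span-to-BIO conversion and the topmost-head heuristic; the spaCy model name and the exact head-selection rule are our assumptions:

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English model; GME annotates the English MPQA corpus

def span_to_bio(n_tokens, start, end):
    """BIO tags for a propositional span given as token offsets (end exclusive)."""
    return ["B" if i == start else "I" if start < i < end else "O"
            for i in range(n_tokens)]

def event_head(doc, start, end):
    """Topmost head of the span: the token whose syntactic head falls
    outside the span (or which is its own head, i.e. the sentence root)."""
    for tok in doc[start:end]:
        if tok.head.i < start or tok.head.i >= end or tok.head.i == tok.i:
            return tok.i

doc = nlp("We should remain calm")
print(span_to_bio(len(doc), 2, 4), event_head(doc, 2, 4))
```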
"1. MODAL SENSE CLASSIFICATION.", "Here we aim to classify the modal sense of a trigger, assuming the modal trigger is already known.", "Specifically, we examine the contribution of the context relative to the lemma.", "We perform sense classification with the following variations:", "(i) Vote: a majority vote;", "(ii) Token: out-of-context token-based classification, where the trigger token is encoded using GloVe (Pennington et al., 2014);", "(iii) Context: token-in-context classification, given the whole sentence encoded with RoBERTa (Liu et al., 2019) as input, with a marked trigger position;", "(iv) Masked: given the sentence encoded with RoBERTa but with the trigger masked;", "(v) Trigger+Head: only the trigger word and event head are given, encoded with RoBERTa; and finally,", "(vi) Full+Head: the full sentence is encoded using RoBERTa with both the trigger and the event head marked (an input-construction sketch is given below).", "2. MODALITY DETECTION AND CLASSIFICATION.", "This is a realistic scenario, where we do not assume the trigger is known.", "We aim to both identify the trigger and label its sense.", "We model this as a tagging task.", "Every token in the corpus is assigned a BIOSE tag if it belongs to a modal trigger, which is appended with a suffix indicating its modal sense.", "We additionally perform variations of this task by including the head of the event as a feature (with either gold or predicted heads).", "Table 3 shows an example of the BIOSE tagging of modal triggers, with and without the event.", "3. MODAL-EVENT DETECTION.", "Detecting and classifying modal triggers in isolation is insufficient for applications, as it is crucial to detect the event being modified.", "Here we predict a modal event and aim to relate it to its trigger and modal sense.", "We model this as sequence labeling, with different tagging schemes to indicate the event being modified.", "First, we aim to detect only the event.", "In (i), we predict BIO tags for the propositional spans.", "In (ii), we predict a HEAD label for the event head.", "Next, we aim to jointly predict the modal triggers and their modified events.", "To this end, in (iii) we predict BIOSE-{E|T} tags for the event span, concatenating the related modal trigger.", "That is, within a single event span marked with BIO, E marks the propositional content and T marks the trigger.", "We experiment with and without the modal sense appended to the trigger.", "Finally, in (iv) we predict BIOSE-{sense} tags that indicate the modal trigger, along with a HEAD tag for the event head.", "Table 3: Representing Event-Based Modality Using a BIO Tagging Scheme.", "On the left, the BIOSE-label tags are used to label the modal triggers.", "In the middle column, BIO tags track the modal triggers, and H indicates the event head.", "On the right, the BIO tags track the event span, with T and E labeling the trigger and the event span, respectively.", "Table 4: Modal Sense Classification with Oracle Triggers.", "The labels that indicate modal sense are drawn from the proposed hierarchy, and we experiment with multiple levels of granularity: Modal/Not Modal: a binary distinction, indicating whether the token is a modal trigger or not.", "Coarse-grained: a 3-way distinction, indicating whether the token is a modal trigger and, if so, what coarse-grained sense it has (Plausibility vs. Priority).", "Fine-grained: indicating whether the token is a modal trigger and, if so, which one of the senses at the lowest level of the hierarchy it has.", "We conflated Desires/Wishes and Plans/Goals into a single type called Intentions, since both of these senses are under-represented in our corpus.", "See Appendix A for the complete label distribution in our data.",
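A sketch of how the inputs for the Context / Masked / Full+Head variations above can be built; the marker tokens <t> and <h> and the plain-string handling are our own simplification (in practice one would register special tokens with the RoBERTa tokenizer):

```python
def build_input(tokens, trigger_i, head_i=None, mask_trigger=False):
    """Prepare input text for the Context / Masked / Full+Head variants."""
    out = []
    for i, tok in enumerate(tokens):
        t = "<mask>" if (mask_trigger and i == trigger_i) else tok
        if i == trigger_i:
            t = f"<t> {t} </t>"   # mark the trigger position
        if head_i is not None and i == head_i:
            t = f"<h> {t} </h>"   # mark the event head (Full+Head variant)
        out.append(t)
    return " ".join(out)

# build_input("We should remain calm".split(), trigger_i=1, head_i=2)
# -> "We <t> should </t> <h> remain </h> calm"
```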
Priority).", "Fine-Grained : indicating if the token is a modal trigger, and if so, which one of the senses at the lowest level of the hierarchy it has.", "We conflated Desires/Wishes and Plans/Goals into a single type called Intentions , since both these senses are under-represented in our corpus.", "See appendix A for the complete label distribution in our data.", "Evaluation Metrics We report for all experiments BIOSE -chunk Precision, Recall, and (Macro) F1, calculated with the official ConllEval script (Sang and Buchholz, 2000).", "When evaluating span tagging for event-based modality we report labeled and unlabeled scores.", "When we report unlabeled F1 for trigger classification, we check whether the token has been correctly identified as modal vs. not-modal, regardless of its sense.", "Models Our baseline for modal trigger detection is a simple majority vote baseline where each token Baseline RoBERTa MVALL MVALL Modal/Not 99.04 68.24 99.9 73.2 Coarse-Grained 93.29 63.94 93.3 68.9 Fine-Grained 73.48 55.23 78.5 58.14 Table 5 : The Diversity of Modal Triggers: F1 of MV triggers vs. All triggers, Majority Vote Baseline vs. RoBERTa in the test set is tagged with its most frequent label in the training set.", "For detecting modal triggers as well as for event detection, we experiment by fine-tuning a ROBERTA -based classifier (Liu et al., 2019).", "8 The encoded sequence is fed through a linear layer with a softmax function predicting the appropriate tag for a given token.", "For the shorter spans (modal triggers) we predict the tag for every token-in-context.", "For the longer spans (events spans or events+trigger spans) we perform CRF decoding.", "The models we used are AllenNLP (Gardner et al., 2018) implementations.", "Whenever we use the trigger or the event as features to the model, we add special tokens to the input, marking their respective spans in the sentence.", "The hyper-parameters of the models are as follows: we use ROBERTABASE and fine-tune it for 6 epochs with a batch-size of 8, a learning rate of 1 e 5 and the adam optimizer.", "9 5 Results Setting the Stage Before evaluating our models on the proposed tasks, we first assess the empirical challenge of our event-based modality detection task relative to the modal sense sentence classification (SC) setup of Marasovic and Frank (2016).", "Their work focuses on 6 modal auxiliary verbs ( can, could, may, must, should , and shall ) and modal senses from a restricted set of three labels ( deontic, dynamic, epistemic ).", "Note that their proposed setup is not designed to separate modal sentences from non-modal ones, as the Marasovic and Frank (2016) dataset contains only modal sentences.", "Second, it cannot directly indicate that a sentence contains multiple modal triggers with different senses.", "8 We also experimented with a PyTorch-based sequence tagging model (NCRF++ by Yang and Zhang (2018)) with GoogleNews-vectors-negative300 embeddings (https://code.google.com/archive/p/word2vec/), but this setting did not outperform our majority vote baseline (and certainly under-performed the model based on contextualized representations), and we didn't pursue this direction further.", "9 The code for data processing, configuration files and training are available at https://github.com/OnlpLab/ Modality .", "Table 6 : Precision, Recall, and F1 for Baseline and RoBERTa.", "In labeled the model tagged each token for modal/not modal and classified the identified modal tokens.", "In unlabeled the labels are given, but not counted beyond 
"Table 7: Replicating the Setup of Marasović and Frank (2016) on the GME Data.", "Results drop for GME when using only sentences with modal verbs (MV), and they drop even further when using all of GME's sentences (namely, with all modal triggers).", "We trained and tested a CNN compatible with theirs on their data as well as on our data (GME), using their proposed settings.", "(Footnote: Some dependencies in the Marasović and Frank (2016) code are deprecated, so we use a simple off-the-shelf CNN model of AllenNLP (Gardner et al., 2018).)", "We mapped our Priority, Agent, and Knowledge to their deontic, dynamic, and epistemic, respectively, and ignored our State of the World (circumstantial).", "Here, we report the same sentence-based accuracy metrics as they do.", "Table 7 shows the results on the two datasets, theirs and GME.", "We see that accuracy on the SC task drops when switching from their data to ours, and that it drops further when moving from a closed set of POS tags (modal verbs) to all targets.", "All in all, sentence classification is not sufficient to reflect the richness of event-based modality annotation, and we conjecture that the SC setup would be too restrictive for real-world applications.", "Modal Sense Classification: Next, we report results for the first task we defined, labeling the modal sense of an oracle trigger, as shown in Table 4.", "The majority-vote baseline is high, which is partly due to the trigger lemma overlap between train and dev/test (between 73% and 79%, depending on the split).", "Additionally, we found only 25% of the trigger lemmas in the corpus to be ambiguous between Plausibility and Priority.", "Exposing the context, either by means of the full sentence or only the event head, improves results, and the improvement is more substantial for the fine-grained distinctions.", "Removing the lemma and using only the context (Masked) harms the results, but performance is still impressive and shows that the environment has a non-negligible contribution to sense disambiguation.", "Finally, the sense classification is surprisingly effective also in cases where different modal events in the same sentence are intertwined.", "An interesting example is the following sentence, with modal triggers in bold (sense in brackets): 'How can (Plausibility), under such circumstances, America allow (Priority) itself to express an opinion (Plausibility) over the issue of human rights (Priority) in other countries.'", "Even when masking the triggers, the fine-tuned language model was able to correctly identify this alternating pattern of Plausibility and Priority.", "Modal Trigger Detection: Table 5 shows the modal trigger detection results when applied only to the six modal verbs (MVs), as opposed to modal triggers of unrestricted POS tags (ALL).", "We see that when targeting only MVs, detecting modal elements is almost trivial for both the baseline and RoBERTa.", "Both models are also quite proficient (F1=93) at separating the different high-level modal senses (Priority vs. Plausibility) of the modal types that we defined.",
"Once we switch to 'All triggers', results substantially drop.", "Also, when switching to finer-grained categories, we observe an expected drop for both the baseline and RoBERTa, with RoBERTa performing significantly better.", "Table 6 presents the breakdown of the scores, labeled and unlabeled, for the different levels of granularity by the different models.", "In all cases, RoBERTa shows a consistent increase of at least 5 absolute F1 points over the baseline, for all levels of granularity.", "Furthermore, our unlabeled scores demonstrate that predicting the fine-grained categories with RoBERTa actually helps to determine the modal/non-modal decision boundary, with an F1 improvement of about 1 absolute point at all levels.", "For labeled accuracy, we observe an expected drop in the F1 scores when taking the fine-grained labels into account.", "Yet, the performance is better than the majority-vote baseline and is far better than chance for these nuanced distinctions.", "Table 8: Modal Trigger Tagging Results, F1 on Detected Spans, with and without Event Head Information.", "In the Fine-Grained Labeled RoBERTa setting, the breakdown of the F1 performance by label is: agent: 72.7, world: 54.7, rules/norms: 60.4, knowledge: 59.3, intentional: 46.1.", "These scores do not correlate with the frequency of each sense in the training data; e.g., agent is the least frequent sense, but the model performed best at tagging it.", "Looking at ambiguous lemmas, i.e., lemmas that can have different modal senses depending on context, one can see that agent and rules/norms are the least ambiguous senses, which explains their higher performance scores.", "Breaking down the performance by coarse-grained POS tag shows that VERBS are the easiest to tag (66.5), followed by ADVERBS (59.7), then ADJECTIVES (55.9) and, lastly, NOUNS, which, with a score of 43.8, seem to be the hardest to tag.", "Interestingly, ADJECTIVES are more ambiguous than NOUNS; we thus do not have a satisfying explanation for why it is harder to classify the modality of NOUN triggers.", "Table 8 shows the effect of event heads on modal trigger identification and classification, considering whether to model them separately or jointly in realistic scenarios where the trigger is not known in advance.", "Gold event information as a feature for modal trigger tagging is helpful, but when this information is predicted, propagated errors decrease performance.", "Jointly predicting both triggers and event heads only very slightly decreases performance for the more fine-grained sense categories, making it a viable option for classification.", "Event Detection Based on Modal Triggers: Table 9 shows that event-span detection is a harder task than merely locating the triggers (cf. Table 6).",
Table 6).", "Interestingly, predicting the span given information about the trigger (Trigger as Feature) works better than predicting the span with no such information (No-trigger).", "This holds both when the triggering event is provided by an Oracle (Gold'), or whether it is predicted by RoBERTa (Predict').", "Improving modal trigger prediction is thus expected to further contribute to the accurate identification of events, and to event-span boundary detection.", "In general, F1 No Trigger Trigger Joint Trigger Gold Predict Joint Span Modal / Not 51.1 71.13 53.55 50.05 Coarse-Grained 51.1 70.91 53.56 49.85 Fine-Grained 51.1 70.38 53.09 48.24 Head Modal / Not 56.3 72.3 55.8 56.9 Coarse-Grained 56.3 71.6 56.0 60.7 Fine-Grained 56.3 70.9 55.2 55.3 Table 9 : Event Detection Results, F1 on Detected Spans, with and without Modal Trigger Information.", "head prediction shows better results than span prediction, partly due to the F1 score on spans being a restrictive metric in cases of partial overlap.", "Error Analysis To qualitatively assess the usability of RoBERTa's output, two trained human experts manually inspected the errors in 112 modal triggers in the dev set.", "Out of 36 false negatives (FN), 6 (16% of the FN) are in fact correct (incor-rectly tagged by the annotators as modal), and out of 27 false positives, 21 (78% of the FP) are in fact correct (modals missed by the annotators).", "This leads to the conclusion that the gold annotation by the experts, while being precise, has incomplete coverage and lower recall.", "It implies that RoBERTa's precision is in actuality higher , with a larger share of its predictions being correct.", "We propose an event-based modality detection task which is based on solid theoretical foundations yet is adapted to fit the needs of NLP practitioners.", "The task has three facets: modal triggers can be of any syntactic type, sense labels are drawn from a unified taxonomy we propose, and modal triggers are associated with their modified events.", "We propose this task and standard as a potential extension for standard semantic representations (AMR, SDG, UCCA, etc.) towards easy incorporation of modal events as features in downstream tasks.", "We thank Yoav Goldberg, Ido Dagan, Noah Smith, Graham Katz, Elena Herburger, and members of the BIU-NLP Seminar for thoughtful feedback and fruitful discussion.", "We also thank 3 anonymous reviewers for their insightful remarks.", "This research is supported by an ERC-StG grant of the European Research Council (no. 677352), the Israel Science Foundation (grant no. 1739/26 and grant no. 2299/19), and the National Science Foundation (BCS-1053038), for which we are grateful." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "method", "objective", "method", "abstain", "result", "result", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other" ]
[ "Historical text normalization, the task of mapping historical word forms to their modern counterparts, has recently attracted a lot of interest (Bollmann, 2019; Tang et al., 2018; Lusetti et al., 2018; Bollmann et al., 2018; Robertson and Goldwater, 2018; Bollmann et al., 2017; Korchagina, 2017).", "Yet, virtually all approaches suffer from the two limitations:", "1) They consider a fully supervised setup, often with impractically large manually normalized datasets;", "2) Normalization happens on words in isolation.", "By utilizing a simple generative normalization model and obtaining powerful contextualization from the target-side language model, we train accurate models with unlabeled historical data.", "In realistic training scenarios, our approach often leads to reduction in manually normalized data at the same accuracy levels.", "Text normalization is the task of mapping texts written in some non-standard variety of language L (a dialect or an earlier diachronic form) to some standardized form, typically the official modern standard variety of L (Table 1).", "Examples include the normalization of informal English-language tweets (Han and Baldwin, 2011); quasi-phonetic transcriptions of dialectal Swiss German (Samardzic et al., 2015); and historical documents such as religious texts in 15th-century Icelandic (Bollmann et al., 2011; Pettersson et al., 2013b; Ljubesic et al., 2016, inter alia ).", "Text can to a large extent be normalized by replacing non-standard words with their standard counterparts.", "Because of this often-made as-sumption, this task is also known as lexical or spelling normalization (Han and Baldwin, 2011; Tang et al., 2018).", "There has been a lot of interest in historical and dialectal text normalization over the past years.", "Earlier works attempt type-level normalization by way of search for standardized words (Pettersson et al., 2013a; Bollmann, 2012).", "More recently, the dominant approach casts the problem as probabilistic type-level character transduction.", "Most commonly, a fully-supervised machine translation system transduces words in isolation (Bollmann, 2019).", "The use of context is limited to employing a target-side language model for an improved, contextualized decoding (Ljubesic et al., 2016; Etxeberria et al., 2016; Jurish, 2010).", "In this paper, we develop simple approaches to semi-supervised contextualized text normalization.", "On the example of historical text normalization, we show that one can reduce the amount of supervision by leveraging unlabeled historical text and utilizing context at training.", "Our methods build on familiar techniques for semi-supervised learning such as generative modeling and expectationmaximization and unify previous work (search, noisy channel, contextualized decoding, neural character-level transduction) in a simple setup.", "We experimentally validate the strength of our models on a suite of historical datasets.", "In addition to the token-level supervision scenario, we show benefits of a more economic supervision by a word-type normalization dictionary.", "Most normalization approaches attempt to learn a function from historical to modern word types without taking context into consideration.", "This is based on the observation that morpho-syntactic differences between the non-standard and standard varieties (e.g. 
"Table 1: Historical text normalization (example: a historical German spelling of 'wermutsaft in die ohren getropft totet die wurmer darin'; gloss: 'wormwood juice dripped in the ears kills the worms inside').", "Some earlier works cast text normalization as search over standardized word forms (Pettersson et al., 2013a; Bollmann, 2012).", "Hand-crafted rules or a string-distance metric (Levenshtein, 1966) with parameters estimated from the labeled data are used to retrieve the best matching standard candidates.", "Another line of work follows a principled probabilistic solution: a noisy channel model (Shannon, 1948), which consists of a channel p(x|y) and a language model p(y) (Jurish, 2010; Pettersson et al., 2013b; Samardzic et al., 2015; Etxeberria et al., 2016; Ljubesic et al., 2016; Scherrer and Ljubesic, 2016).", "The channel model operates at the character level and takes the form of either a character alignment model (Brown et al., 1993) or a weighted finite-state transducer (WFST, Mohri, 1997).", "Channel model parameters are estimated from a manually normalized corpus.", "The language model is often trained on external target-side data.", "Some works perform normalization of words in context.", "Jurish (2010) and Etxeberria et al. (2016) decode sequences of historical word tokens by combining a character-level channel with a word-level language model p(y_{1:m}).", "Scherrer and Ljubesic (2016) learn a character alignment model directly over untokenized segments of historical texts.", "Numerous neural approaches to text normalization (Tang et al., 2018; Lusetti et al., 2018; Bollmann et al., 2018; Robertson and Goldwater, 2018; Bollmann et al., 2017; Korchagina, 2017) learn a discriminative model p(y|x), parameterized with some generic encoder-decoder neural network, that performs the traditional character-level transduction of isolated words.", "The models are trained in a supervised fashion on large amounts of manually labeled data.", "For example, Tang et al. (2018) train on tens of thousands of labeled pairs, even for varieties that share more than 75% of their vocabularies.", "Except for Lusetti et al. (2018), who use a target-side language model to rerank base model hypotheses in context, no other approach in this group uses context in any way.", "If non-standard language exhibits normalization ambiguity, one would expect contextualization to reduce it.", "For example, historical German desz in the RIDGES corpus (Odebrecht et al., 2017) normalizes to three modern word types: das, des (various forms of the definite article), and dessen (relative pronoun 'whose').", "Knowing the context (e.g. whether the historical word occurs clause-initially or before a neuter noun) would help normalize desz correctly.", "As suggested by Ljubesic et al. (2016), the accuracy of the oracle that normalizes words in isolation by always selecting their most frequent normalization upper-bounds the accuracy of non-contextual systems.", "Many historical normalization corpora do not have high normalization ambiguity (Table 3).", "The upper bound on accuracy for non-contextual normalization is 97.0 on average (±0.02) and is above 92.4 for every historical language that we study here, indicating that lexical normalization is a very reasonable strategy.",
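To make the noisy-channel formulation concrete, the following is a minimal sketch of how candidate normalizations for a token sequence would be scored by combining a channel model with a bigram language model. It uses greedy left-to-right decoding rather than exact Viterbi search for brevity, and all names (the dictionaries holding precomputed probabilities) are illustrative assumptions, not part of the papers discussed here:

```python
import math

def score_sequence(tokens, candidates, channel, lm_bigram):
    """Greedy left-to-right noisy-channel normalization sketch.

    tokens:     list of non-standard word tokens x_1..x_m
    candidates: dict mapping each token x to a list of modern candidates y
    channel:    dict mapping (x, y) to p(x | y)
    lm_bigram:  dict mapping (y_prev, y) to p(y | y_prev)
    """
    prev, output, logp = "<s>", [], 0.0
    for x in tokens:
        best_y, best_score = None, float("-inf")
        for y in candidates.get(x, [x]):
            # Noisy channel: channel probability times LM probability.
            s = (math.log(channel.get((x, y), 1e-10))
                 + math.log(lm_bigram.get((prev, y), 1e-10)))
            if s > best_score:
                best_y, best_score = y, s
        output.append(best_y)
        logp += best_score
        prev = best_y
    return output, logp
```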
"Even if context may sometimes not be necessary for adequately solving the task in a fully supervised manner, we would expect contextualization to lead to more accurate unsupervised and semi-supervised generative models.", "We start off with a generative model in the form of a noisy channel over sequences of words (Eq. 1).", "The channel model factorizes over non-standard words, and a non-standard word x_i depends only on the corresponding standardized word y_i.", "The simple structure of our model follows from the lexical normalization assumption.", "Compared to a discriminative model, which would directly capture the mapping from non-standard word sequences x_{1:m} to standardized y_{1:m} without having to account for how non-standard data arise, this model offers some important advantages.", "First, it can be trained by maximizing the marginal likelihood p(x_{1:m}), which leads to semi-supervised learning.", "Second, we can use a language model estimated from arbitrary external text.", "The only model parameters are the parameters of the channel model p(x_i | y_i).", "The parameters of the language model p(y_{1:m}) are held fixed.", "The channel model p(x_i | y_i) stochastically maps standardized words to non-standard words.", "Any type-level normalization model from Section 2 can be applied here (in the reverse direction from the normalization task).", "For our experiments, we use the neural transducer of Makarov and Clematide (2018), as it has been shown to perform strongly on morphological character transduction tasks.", "Parameterized with a recurrent encoder and decoder, the model defines a conditional distribution over edits $p(x, a \mid y) = \prod_{j=1}^{|a|} p(a_j \mid a_{1:j-1}, y)$, where $y = y_1, \dots, y_{|y|}$ is a standardized word as a character sequence and $a = a_1 \dots a_{|a|}$ is an edit action sequence.",
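To illustrate the edit action sequences this distribution is defined over, here is a minimal dynamic-programming sketch (the action inventory and names are illustrative assumptions) that recovers one minimum-cost action sequence turning a standardized word y into a non-standard word x; such minimum-cost sequences also underlie the marginal-likelihood approximation discussed next:

```python
def min_edit_actions(y, x):
    """Return one minimum-cost sequence of edit actions turning y into x.

    Actions: ('copy',), ('sub', c), ('ins', c), ('del',); unit cost for
    everything except copy. Standard Levenshtein table plus backtrace.
    """
    n, m = len(y), len(x)
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = dist[i - 1][j - 1] + (y[i - 1] != x[j - 1])
            dist[i][j] = min(sub, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
    actions, i, j = [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0
                and dist[i][j] == dist[i - 1][j - 1] + (y[i - 1] != x[j - 1])):
            actions.append(('copy',) if y[i - 1] == x[j - 1]
                           else ('sub', x[j - 1]))
            i, j = i - 1, j - 1
        elif i > 0 and dist[i][j] == dist[i - 1][j] + 1:
            actions.append(('del',))
            i -= 1
        else:
            actions.append(('ins', x[j - 1]))
            j -= 1
    return list(reversed(actions))
```

For example, `min_edit_actions("sie", "sy")` yields a copy, a substitution, and a deletion.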
"Using this model as a channel requires computing the marginal likelihood p(x|y), which is intractable due to the recurrent decoder.", "We approximate p(x|y) by p(x, a*|y), where a* is a minimum-cost edit action sequence from y to x.", "This works well in practice, as the network produces a highly peaked distribution with most probability mass placed on minimum-cost edit action sequences.", "We consider two language model factorizations, which lead to different learning approaches.", "Neural HMM.", "If the language model is an n-gram language model, $p(y_{1:m}) = \prod_{i=1}^{m+1} p(y_i \mid y_{i-n+1:i-1})$ (Eq. 2), the overall generative model has the form of an n-gram Hidden Markov Model (HMM), with transition probabilities given by the language model and emission probabilities given by the channel.", "An HMM has been proposed for this problem before, but with different parameterizations (Jurish, 2010; Etxeberria et al., 2016).", "For simplicity, we use count-based language models in the experiments.", "A full neural parameterization can be achieved with n-gram feedforward neural language models (Bengio et al., 2003).", "RNN LM-based model.", "Our second language model is a word-level recurrent neural language model (RNN-LM, Mikolov, 2012).", "It does not make any independence assumptions, which increases expressive power yet precludes exact inference in the generative model.", "Let U be a set of unlabeled non-standard sentences, V_x the set of non-standard word types in U, and V_y the vocabulary of the standardized variety.", "In the unsupervised case, we train by maximizing the marginal likelihood of U with respect to the channel parameters $\theta$: $\mathcal{L}_U(U, \theta) = \sum_{x_{1:m} \in U} \log \sum_{y_{1:m} \in V_y^m} p(x_{1:m}, y_{1:m})$ (Eq. 3).", "For an n-gram neural HMM, this can be solved using generalized expectation-maximization (GEM, Neal and Hinton, 1998; Berg-Kirkpatrick et al., 2010).", "We compute the E-step with the forward-backward algorithm.", "In the M-step, given the posterior p(y|x) for each non-standard word type x, we maximize the following objective with respect to $\theta$ with a variant of stochastic gradient ascent: $\mathcal{L}_M(U, \theta) = \sum_{x \in V_x} \sum_{y \in V_y} p(y \mid x) \log p_\theta(x \mid y)$ (Eq. 4).", "GEM provably increases the marginal likelihood.", "We train the RNN LM-based model with hard expectation-maximization (hard EM, Samdani et al., 2012).", "This is a simple alternative to approximate inference.", "Hard EM is not guaranteed to increase the marginal likelihood, but it often works in practice (Spitkovsky et al., 2010).", "The difference from GEM is in the E-step.", "To compute it, we decode U with beam search.", "Let $B = \bigcup_{x_{1:m} \in U} \{ y_{1:m} \in V_y^m : y_{1:m} \text{ is in the beam for } x_{1:m} \}$.", "We set the posterior p(Y = y | X = x) to be proportional to the sum of the probabilities of the sentence-wise normalizations from B in which x gets normalized as y.", "Semi-supervised training.", "We linearly combine the maximum likelihood (MLE) objective on the set $S = \{(x, y)_i\}_{i=1}^{n}$ of labeled normalization pairs with the marginal likelihood of U (Eq. 3): $\mathcal{L}(S, U, \theta) = \sum_{(x,y) \in S} \log p_\theta(x \mid y) + \lambda \mathcal{L}_U(U, \theta)$ (Eq. 5), where $\lambda \geq 0$ controls how much information from U flows into the parameter update.",
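A minimal sketch of the E-step for a bigram neural HMM over precomputed candidate sets could look as follows. The names are illustrative; channel scores and bigram LM probabilities are assumed to be given, and the end-of-sentence transition is omitted for brevity:

```python
import numpy as np

def forward_backward_posteriors(tokens, cands, channel, bigram):
    """Compute per-position posteriors p(Y_i = y | x_{1:m}) for a bigram HMM.

    tokens:  non-standard word sequence x_1..x_m
    cands:   dict token -> list of modern candidates (the sets C(x))
    channel: dict (x, y) -> p(x | y)          (emission probabilities)
    bigram:  dict (y_prev, y) -> p(y | y_prev) (transition probabilities)
    """
    m = len(tokens)
    states = [cands[x] for x in tokens]
    emit = [np.array([channel.get((x, y), 1e-12) for y in states[i]])
            for i, x in enumerate(tokens)]
    # Forward pass.
    alpha = [emit[0] * np.array([bigram.get(("<s>", y), 1e-12)
                                 for y in states[0]])]
    for i in range(1, m):
        trans = np.array([[bigram.get((yp, y), 1e-12) for y in states[i]]
                          for yp in states[i - 1]])
        alpha.append(emit[i] * (alpha[i - 1] @ trans))
    # Backward pass.
    beta = [None] * m
    beta[m - 1] = np.ones(len(states[m - 1]))
    for i in range(m - 2, -1, -1):
        trans = np.array([[bigram.get((yp, y), 1e-12) for y in states[i + 1]]
                          for yp in states[i]])
        beta[i] = trans @ (emit[i + 1] * beta[i + 1])
    # Normalized posteriors per position.
    return [dict(zip(states[i],
                     (alpha[i] * beta[i]) / np.sum(alpha[i] * beta[i])))
            for i in range(m)]
```

These per-position posteriors are exactly what gets accumulated into the count table in the M-step objective of Eq. 4.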
"The difference from the unsupervised case is that the M-step computes Eq. 4 scaled with $\lambda$ plus the MLE objective on S.", "In practice, we initially pretrain the channel on the labeled data and then move to full semi-supervised training with some non-zero fixed $\lambda$ for the rest of training.", "Candidate set heuristic.", "Performing EM with the full modern vocabulary V_y as the set of possible normalization candidates is vastly impractical: the forward-backward algorithm runs in $O(m |V_y|^n)$ time.", "In related tasks, this has led to training heuristics such as iterative EM (Ravi and Knight, 2011).", "To keep this computation manageable, we propose generating a candidate set C(x) of k modern words for each non-standard word x.", "To this end, we use approximate nearest neighbor search with edit distance (Hulden, 2009).", "The algorithm efficiently searches through an FST, which encodes a part of the vocabulary, with A* search.", "We encode different word frequency bands of the vocabulary as separate FSTs, which we search in parallel.", "We rerank the candidates taking into account the non-standard and standardized words' relative frequencies (see Appendix).", "Thus, all summations and maximizations over V_y are performed over the reduced set C(x).", "Our heuristic embodies normalization by search (Section 2) and could be replaced with a more informed search and reranking algorithm (Bollmann, 2012; Baron and Rayson, 2008).", "Sometimes, however, the candidate set heuristic is too restrictive.", "It is hard to achieve perfect coverage at manageable candidate set sizes (e.g. if x and the target y have no characters in common, as with historical German sy ↦ sie ('they')).", "Worse still, this approach completely fails if the target y does not appear in the corpus.", "This could be because the corpus is small (e.g. most Wikipedias); rich morphology or orthographic conventions lead to a vast number of word types (e.g. Hungarian); or the target word is not even attested in the standardized variety (e.g. czuhant ↦ zehant ('immediately') in the Anselm historical German corpus (Krasselt et al., 2015)).", "Algorithm 1: GEM training (Section 4.4); the full version uses restarts and candidate pruning (see Appendix).", "Input: unlabeled dataset U, labeled dataset S, development set, number of modern candidates k to generate, number of EM epochs K, and mixture parameter $\lambda$ combining the unsupervised and supervised objectives.", "1: Compute k candidates C(x) for each non-standard word type x in V_x from U (by either method in Section 4.5).", "2: Randomly initialize channel parameters $\theta^{(0)}$.", "3: If the labeled dataset S is non-empty, pretrain $\theta^{(0)}$ on S.", "4: For epoch t = 1 to K do: 5: E-step: 6: Q ← 0; 7: compute channel scores $p_{\theta^{(t-1)}}(x|y)$ for all x in V_x and y in C(x) (use uniform scores if t = 1 and S is empty).", "8: For each non-standard word sequence x_{1:m} in U: 9: run forward-backward or beam search (Section 4.4) to compute each word's posterior p(Y_i | x_{1:m}).", "10: For each position i = 1 to m: 11: Q(y, x_i) ← Q(y, x_i) + p(Y_i = y | x_{1:m}) for all y in C(x_i).", "12: Normalize: p(y|x) ← Q(y, x) / Σ_y' Q(y', x) for all x in V_x and y in C(x).", "13: M-step: 14: start training from $\theta^{(t-1)}$ and use p(y|x) in the unsupervised objective $\mathcal{L}_M(U, \theta)$ (Eq. 4): 15: $\theta^{(t)} \leftarrow \arg\max_\theta \sum_{(x,y) \in S} \log p_\theta(x|y) + \lambda \mathcal{L}_M(U, \theta)$.", "16: Return the $\theta^{(t)}$ leading to the best accuracy on the development set.",
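As a rough illustration of the candidate set heuristic described above: the sketch below substitutes a brute-force edit-distance scan for the FST-encoded A* search the paper uses, and a simple distance/log-frequency trade-off for its frequency-based reranking; the weighting constant and all names are assumptions made for the example:

```python
import heapq
import math

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                           prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def candidate_set(x, vocab_freq, k=50, pool=500):
    """Generate k modern candidates C(x) for a non-standard word x.

    vocab_freq: dict mapping modern word types to corpus frequencies.
    First take the `pool` nearest neighbors by edit distance (brute
    force here, approximate FST search in the paper), then rerank by
    a score trading distance off against log-frequency.
    """
    nearest = heapq.nsmallest(pool, vocab_freq,
                              key=lambda y: edit_distance(x, y))
    reranked = sorted(nearest,
                      key=lambda y: edit_distance(x, y)
                                    - 0.5 * math.log(1 + vocab_freq[y]))
    return reranked[:k]
```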
"We therefore also consider candidate generation with a direct model q(y|x).", "We bootstrap a direct model from a contextualized generative model.", "We fit it by minimizing the cross-entropy of the direct model relative to the posterior p(y|x) of the generative model.", "For a semi-supervised generative model, this objective combines with the MLE on the labeled set S: $\mathcal{L}(S, U, \phi) = \sum_{(x,y) \in S} \log q_\phi(y \mid x) + \mu \sum_{x \in V_x} \sum_{y \in V_y} p(y \mid x) \log q_\phi(y \mid x)$ (Eq. 6), with mixing weight $\mu \geq 0$.", "Any type-level normalization model from prior work could be used here.", "We choose the direct model to be a neural transducer, like the channel.", "It generates candidates using beam search.", "We consider two ways of sentence-wise decoding with our generative models.", "The first uses the maximum a posteriori (MAP) decision rule, which finds a normalization that maximizes the posterior p(y_{1:m} | x_{1:m}).", "Depending on the factorization of the language model, we solve this exactly (with the Viterbi algorithm) or approximately (with beam search).", "The other approach is to learn a reranker model on the development set.", "The model rescores sentence hypotheses y_{1:m} generated by the base model (with k-best Viterbi or beam search).", "It uses rich non-local features (the hypothesis' scores under word-level and character-level RNN language models) as well as the length-normalized base model score, the mean out-of-vocabulary rate, and the edit distance from x_{1:m} (see Appendix).", "We implement a PRO reranker (Hopkins and May, 2011) that uses Hamming loss.", "Algorithm 2: MAP decoding or reranking (Section 4.6). Input: non-standard word sequence x_{1:m}, the number of modern candidates c to generate, and the number of sentence hypotheses k to generate.", "For our experiments, we use eight datasets compiled by various researchers (Pettersson, 2016; Ljubesic et al., 2016; Bollmann, 2018) from historical corpora (Tables 2 and 3).", "Seven languages are Indo-European: Germanic (English, German, Icelandic, and Swedish), Romance (Portuguese and Spanish), and Slavic (Slovene).", "Additionally, we experiment with Hungarian, a Finno-Ugric language.", "From the Slovene dataset, we only use the collection of the older and more challenging texts in the Bohoric alphabet.", "The data are of different genres (letters, religious and scientific writings).", "The earliest texts are in 14th-century Middle English.", "In many datasets, the proportion of identity normalizations is substantial.", "The smallest word overlap is in the Hungarian data (18%), the largest in English (75%).", "All corpora are tokenized and aligned at the segment and token level.", "For some datasets, either the segments do not coincide with grammatical sentences, or the data have no segment boundaries at all (e.g. Hungarian or Icelandic).", "In such cases, to make the input amenable to training with context, we resort to sentence splitting on punctuation marks.", "The datasets are featured in the large-scale study of Bollmann (2019), who conveniently provides most data in a unified format at https://github.com/coastalcph/histnorm/.", "We also split very long segments to ensure a maximum segment length of fifty words.", "Token alignment is largely one-to-one, with rare exceptions.", "Cliticization and set phrases (e.g. German mustu ↦ musst du ('you must'), aller handt ↦ allerhand ('every kind of')) are common causes of many-to-one alignments, which our models fail to capture properly.",
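A minimal sketch of the bootstrapping objective in Eq. 6: a toy loss computation distilling the generative posterior into a generic direct model. The model interface (`.log_prob`) and all names are assumptions made for the example, not the paper's actual implementation:

```python
import torch

def distillation_loss(direct_model, labeled_pairs, posteriors, mu=1.0):
    """Cross-entropy of the direct model q(y|x) against the generative
    posterior p(y|x), combined with MLE on labeled pairs (cf. Eq. 6).

    labeled_pairs: list of (x, y) gold normalizations (the set S)
    posteriors:    dict x -> {y: p(y|x)} from the trained generative model
    direct_model:  any object with .log_prob(y, x) returning log q(y|x)
                   as a differentiable torch scalar
    """
    loss = torch.zeros(())
    for x, y in labeled_pairs:                  # supervised MLE term
        loss = loss - direct_model.log_prob(y, x)
    for x, post in posteriors.items():          # distillation term
        for y, p in post.items():
            loss = loss - mu * p * direct_model.log_prob(y, x)
    return loss
```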
"State-of-the-art.", "We compare our models to the strongest models for historical text normalization: the Norma tool (Bollmann, 2012), which implements search over standardized candidates, and the character-level statistical machine translation model (cSMT, Ljubesic et al., 2016), which uses the Moses toolkit (Koehn et al., 2007).", "This approach estimates a character n-gram language model on external data and fits a MERT reranker model (Och, 2003) on the development set.", "According to Bollmann (2019), Norma performs best in the low-resource setting (at most 500 labeled tokens), and cSMT should be preferred in all other data conditions.", "Norma's strong performance in the low-resource scenario derives from the fact that searching for candidates can be fairly easy for some languages, e.g. English.", "The reranker trained on the development set is key to cSMT's strength.", "Realistic low-resource setting.", "Our contextualized models are particularly appealing when labeled data are limited to at most a couple of thousand annotated word pairs.", "This would be the most common application scenario in practice, and approaches requiring tens of thousands of training samples would be ruled out as unrealistic.", "Table 2 (excerpt): time period and train/dev sizes per dataset, with references, e.g. de 1482-1652, 41.9/9.7 (Odebrecht et al., 2017); en 1386-1698, 147.8/16.3 (Markus, 1999); es 15th-19th c.", "We therefore experiment with small labeled training set sizes n ranging from 500 to 5K.", "Additionally, we consider the unsupervised scenario (n = 0), which might be less relevant practically (even a small amount of labeled data might lead to substantial improvement) but allows us to demonstrate most directly the advantage of our approach.", "To keep the experiments close to the real-life application scenario (Kann et al., 2019), we additionally cap the size of the development set at 2K tokens.", "Otherwise, we require that the development set have 500 tokens more than the labeled set S, to ensure that we validate on not too small a number of word types (e.g. at 1K tokens, we get only about 600 word types on average).",
"Finally, the unlabeled set U comprises non-standard word sequences from all the remaining non-test data.", "Our sampled development sets are much smaller than the original development sets from the official data split.", "So as not to waste any data, we also include the historical part of the rest of the original development set in the unlabeled set U.", "The labeled training set S is sampled uniformly at random from U, with targets.", "Semi-supervised training with a type-level normalization dictionary.", "Supervision by a type-level dictionary (as opposed to token-level annotations) is a simple and effective way of reducing the amount of manually labeled data (Garrette et al., 2013).", "We simulate type-level normalization dictionary construction by selecting the d most frequent non-standard word types from the original training set.", "We build a labeled set S by pairing them with the most frequent standard word types that they normalize to.", "We experiment on German and Slovene.", "We use a development set of 500 tokens.", "Candidates are generated with the candidate set heuristic, unless stated otherwise.", "For the neural HMM, we fit count-based bigram language models using KenLM (Heafield et al., 2013).", "All RNN language models are parameterized with a Long Short-Term Memory cell (Hochreiter and Schmidhuber, 1997) and use dropout regularization (Zaremba et al., 2014).", "The HMMs use candidate sets C(x) of 150 candidates; the RNN LM-based models use 50 candidates.", "We set the beam size of the RNN LM-based models to four, for both final decoding and the E-step.", "For reranking, the base HMMs output 150 k-best sentence hypotheses, and the RNN LM-based models output the beam.", "The reranker models are trained with the perceptron algorithm.", "The direct models are trained with AdaDelta.", "We decode them with beam search and rerank the beam with a PRO reranker that uses the channel and direct model scores and relative frequency as features.", "We use the top two reranked candidates as the new candidate set.", "We refer the reader to the Appendix for further details on training.", "We train Norma and cSMT on our data splits using the training settings suggested by the authors.", "The semi-supervised contextualized approach results in consistent improvements for most languages and labeled data sizes (Tables 4 and 5).", "Compared to cSMT, the average error reduction ranges from 19% (n = 500) to almost 3% (n = 5K), or 8% when excluding Hungarian, the language on which the models perform worst.", "Reranking provides an important boost (an error reduction of almost 5% compared to the base model, and almost 8% for neural HMMs), and bootstrapping direct model candidates results in even better performance (an error reduction of almost 14%).", "Unsupervised case.", "Remarkably, with no labeled training data (and only a 500-token labeled development set), the best configuration achieves 88.4% of the top scores reported for fully supervised models (Table 2 of Bollmann (2019)).", "It outperforms the Norma baseline trained on n = 1K labeled samples, reducing its error by almost 4%.", "We see strong performance for languages where the unlabeled dataset U is large (official training and development sets together, Table 2).", "This includes English, which shows little ambiguity (Table 3) and so would be expected to profit less from contextualization.", "Effects of the modern corpus and preprocessing.", "The size and coverage of the Wikipedia dump (Table 3) for Icelandic and particularly Hungarian degrade the models' performance and are likely the key reason why cSMT outperforms all contextual models for Hungarian as the labeled dataset grows (n = 2.5K and n = 5K), despite the large amount of unlabeled Hungarian text.",
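For reference, the relative error reductions quoted in this results discussion can be computed from accuracies as in this small helper (a convention check, not code from the paper):

```python
def error_reduction(acc_baseline, acc_system):
    """Relative error reduction of a system over a baseline, in percent.

    E.g., going from 90.0 to 91.9 accuracy reduces the error
    (100 - 90 = 10 points) by 19%.
    """
    err_base = 100.0 - acc_baseline
    err_sys = 100.0 - acc_system
    return 100.0 * (err_base - err_sys) / err_base

assert round(error_reduction(90.0, 91.9), 1) == 19.0
```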
"The RNN LM-based models are hit hardest, due to having the poorest coverage.", "The lack of original segment boundaries (Table 3; Icelandic is only partially segmented) further degrades performance.", "Remarkably, the overall approach works despite the language models and candidate sets using out-of-domain standardized data.", "Leveraging in-domain data, such as collections of literary works from the time period of the source historical text, could lead to even better performance (Berg-Kirkpatrick et al., 2013).", "Table 4 (excerpt, accuracies for de, en, es, hu, is, pt, sl, sv, with averages and averages excluding hu): identity baseline 44.36, 75.29, 73.40, 17.53, 47.62, 65.19, 40.74, 58.59, avg 52.84 (57.87 excluding hu); best supervised 88.22, 95.24, 95.02, 91.70, 87.31, 95.18, 93.30, 91.13, avg 92.14 (92.19 excluding hu).", "Candidate generation with the direct model.", "Generating candidates with the direct model leads to large gains for languages with poor coverage (the Icelandic and Hungarian RNN LM-based models see an average error reduction of over 20% and 14%, respectively).", "At larger labeled dataset sizes (Table 5), bootstrapping a direct model and reranking its output without context becomes an effective strategy (Icelandic, Portuguese).", "Normalization ambiguity.", "We would expect languages with higher normalization ambiguity to profit from contextualization (Ljubesic et al., 2016).", "German, Portuguese, and Spanish gain even in the most competitive semi-supervised 5K condition, consistent with the amount of ambiguity they exhibit (Table 3).", "Losses and modest gains are observed for the languages with the lowest ambiguity rates (Slovene, Swedish).", "We look at the accuracies on unambiguous and ambiguous normalizations (Figure 1).", "The contextual model consistently outperforms cSMT on ambiguous tokens, often by a wide margin and even when cSMT is better overall (Slovene).", "An extreme case is German at n = 5K, where the two approaches perform similarly on unambiguous tokens, yet cSMT suffers considerably on ambiguous ones (a 38% error reduction by the neural HMM).", "German ranks second by normalization ambiguity (Table 3).", "Type-level normalization dictionary.", "We observe gains equivalent to using a token-level dataset of at least double the dictionary size (Table 6).", "Slovene profits a lot from dictionary supervision, with the 1K-type model performing close to the 5K-token model.", "Shortcomings of the approach.", "A general problem of our approach, as well as of most approaches that we build on, is the reliance on gold tokenization.", "Overall, we have faced minor issues with tokenization (one notable example is Swedish, where 0.6% of the target-side test data are words with a colon, for which we fail to retrieve candidates from Wikipedia).", "Tokenization remains a challenge for the normalization of unpreprocessed non-standard data (Berg-Kirkpatrick et al., 2013).", "Clearly, one can simultaneously use both methods of candidate generation (Section 4.5).", "We leave it for future work to verify whether this leads to improved performance.", "Computing the posterior p(y|x) in both generative models is hard, which is why we are forced to reduce the number of admissible candidates y and, in the case of the RNN LM-based model, to approximate the posterior with maximization.", "This problem can be addressed in a principled way by using variational inference (Jordan et al., 1999), a framework for approximate inference that deals with intractable distributions.",
distributions.", "We leave it for future work to validate its effectiveness for this problem.", "As noted earlier, it is a simplification to assume that non-standard text is tokenized.", "Being able to normalize across token boundaries (by merging multiple non-standard tokens or splitting one into multiple standardized ones) is crucial for tackling real-world text normalization tasks and related problems such as optical character recognition error correction.", "An appealing direction for future work would be developing a joint model for text tokenization and normalization.", "One family of latent-variable models that would be suitable for this task are segmental recurrent neural networks (SRNNs, Kong et al., 2016).", "SRNNs explicitly model input segmentation and have been successfully applied to online handwriting recognition, Chinese word segmentation, joint word segmentation and part-of-speech tagging (Kong et al., 2016; Kawakami et al., 2019).", "This paper proposes semi-supervised contextual normalization of non-standard text.", "We focus on historical data, which has gained attention in the digital humanities community over the past years.", "We develop simple contextualized generative neural models that we train with expectation maximization.", "By leveraging unlabeled data and accessing context at training time, we train accurate models with fewer manually normalized training samples.", "No labeled training data are necessary to achieve 88.4% of the best published performance that uses full training sets.", "Strong gains are observed for most of the considered languages across realistic low-resource settings (up to 5 K labeled training tokens).", "The techniques developed here readily apply to other types of normalization data (e.g. informal, dialectal).", "We will make our implementation publicly available.", "2 Acknowledgments We thank the reviewers for helpful comments and Eva Pettersson and Manfred Markus for their help with the English dataset.", "This work has been supported by the Swiss National Science Foundation under grant CR-SII5 173719." ]
[ "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "We propose approaches to Quality Estimation (QE) for Machine Translation that explore both text and visual modalities for Multimodal QE.", "We compare various multimodality integration and fusion strategies.", "For both sentence-level and document-level predictions, we show that state-of-the-art neural and feature-based QE frameworks obtain better results when using the additional modality.", "Quality Estimation (QE) for Machine Translation (MT) (Blatz et al., 2004; Specia et al., 2009) aims to predict the quality of a machine-translated text without using reference translations.", "It estimates a label (a category, such as good' or bad', or a numerical score) for a translation, given text in a source language and its machine translation in a target language (Specia et al., 2018b).", "QE can operate at different linguistic levels, including sentence and document levels.", "Sentence-level QE estimates the translation quality of a whole sentence, while document-level QE predicts the translation quality of an entire document, even though in practice in literature the documents have been limited to a small set of 3-5 sentences (Specia et al., 2018b).", "Existing work has only explored textual context.", "We posit that to judge (or estimate) the quality of a translated text, additional context is paramount.", "Sentences or short documents taken out of context may lack information on the correct translation of certain (esp. ambiguous) constructions.", "Inspired by recent work on multimodal machine learning (Baltrusaitis et al., 2019; Barrault et al., 2018), we propose to explore the visual modality in addition to the text modality for this task.", "creasingly accompanied with visual elements such as images or videos, especially in social media but also in domains such as e-commerce.", "Multimodality has not yet been applied to QE.", "Table 1 shows an example from our e-commerce dataset in which multimodality could help to improve QE.", "Here, the English noun shorts is translated by the adjective court (for the adjective short ) in French, which is a possible translation out of context.", "However, as the corresponding product image shows, this product is an item of clothing, and thus the machine translation is incorrect.", "External information can hence help identify mismatches between translations which are difficult to find within the text.", "Progress in QE is mostly benchmarked as part of the Conference on Machine Translation (WMT) Shared Task on QE.", "This paper is based on data from the WMT'18 edition's Task 4 document-level QE.", "This Task 4 aims to predict a translation quality score for short documents based on the number and the severity of translation errors at the word level (Specia et al., 2018a).", "This data was chosen as it is the only one for which meta information (images in this case) is available.", "We extend this dataset by computing scores for each sentence for a sentence-level prediction task.", "We consider both feature-based and neural state-of-the-art models for QE.", "Having these as our starting points, we propose different ways to integrate the visual modality.", "The main contributions of this paper are as follows:", "(i) we introduce the task of Multimodal QE (MQE) for MT as an attempt to improve QE by using external sources of information, namely images;", "(ii) we propose several ways of incorporating visual information in neural-based and feature-based QE architectures; and", "(iii) we achieve the state-of-the-art performance for such architectures in 
"QuEst++: QuEst++ (Specia et al., 2015) is a feature-based QE framework composed of two modules: a feature extractor module, to extract the relevant QE features from both the source sentences and their translations, and a machine learning module.", "We only use this framework for our experiments on document-level QE, since it does not perform well enough for sentence-level prediction.", "We use the same model (Support Vector Regression), hyperparameters, and feature settings as the baseline model for the document-level QE task at WMT'18.", "deepQuest: deepQuest (Ive et al., 2018) is a neural-based framework that provides state-of-the-art models for multi-level QE.", "We use the BiRNN model, a light-weight architecture which can be trained at either the sentence or the document level.", "The BiRNN model uses an encoder-decoder architecture: it takes as input both the source sentence and its translation, which are encoded separately by two independent bi-directional Recurrent Neural Networks (RNNs).", "The two resulting sentence representations are then concatenated as a weighted sum of their word vectors, generated by an attention mechanism.", "For sentence-level predictions, the weighted representation of the two input sentences is passed through a dense layer with sigmoid activation to generate the quality estimates.", "For document-level predictions, the final representation of a document is generated by a second attention mechanism, as the weighted sum of the weighted sentence-level representations of all the sentences within the document.", "The resulting document-level representation is then passed through a dense layer with sigmoid activation to generate the quality estimates.", "Additionally, we propose and experiment with BERT-BiRNN, a variant of the BiRNN model.", "Rather than training the token embeddings on the task at hand, we use large-scale pre-trained token-level representations from the multilingual cased base BERT model (Devlin et al., 2019).", "During training, the BERT model is fine-tuned by unfreezing the weights of the last four hidden layers along with the token embedding layer.", "This performs comparably to the state-of-the-art predictor-estimator neural model of Kepler et al. (2019).",
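A minimal sketch of a sentence-level BiRNN QE model in the spirit described above (Keras-style, with illustrative dimensions; this is not the actual deepQuest implementation):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_birnn_qe(vocab_src=30000, vocab_tgt=30000, dim=64, maxlen=70):
    # Two independent bi-directional RNN encoders, one per language.
    src_in = layers.Input(shape=(maxlen,), name="source_tokens")
    tgt_in = layers.Input(shape=(maxlen,), name="target_tokens")
    src = layers.Embedding(vocab_src, dim, mask_zero=True)(src_in)
    tgt = layers.Embedding(vocab_tgt, dim, mask_zero=True)(tgt_in)
    src = layers.Bidirectional(layers.GRU(dim, return_sequences=True))(src)
    tgt = layers.Bidirectional(layers.GRU(dim, return_sequences=True))(tgt)
    # Concatenate the two encodings along the time axis and pool them
    # with a simple learned attention into one sentence-pair vector.
    states = layers.Concatenate(axis=1)([src, tgt])
    scores = layers.Dense(1, activation="tanh")(states)
    weights = layers.Softmax(axis=1)(scores)
    pooled = tf.reduce_sum(weights * states, axis=1)
    # Sigmoid output for the (normalized) quality score.
    out = layers.Dense(1, activation="sigmoid")(pooled)
    return Model([src_in, tgt_in], out)
```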
(2019).", "WMT'18 QE Task 4 data: This dataset was created for the document-level track.", "It contains a sample of products from the Amazon Reviews Dataset (McAuley et al., 2015) taken from the Sports & Outdoors category.", "Documents' consist of the English product title and its description, its French machine-translation and a numerical score to predict, namely the MQM score (Multidimensional Quality Metrics) (Lommel et al., 2014).", "This score is computed by annotating and weighting each word-level translation error according to its severity (minor, major and critical): MQM Score = 1 n min + 5 n maj + 10 n cri n where n is the total number of words, and n i is the number of errors annotated with the corresponding error severity.", "Additionally, the dataset provides one picture per product, as well as pre-extracted visual features, as we discuss below.", "For the sentence-level QE task, each document of the dataset was split into sentences (lines), where every sentence has its corresponding MQM score computed in the same way as for the document.", "We note that this variant is different from the official sentence-level track at WMT since for that task visual information is not available.", "Text features: For the feature-based approach, we extract the same 15 features as those for the baseline of WMT'18 at document level.", "For the neural-based approaches, text features are either the learned word embeddings (BiRNN) or pre-trained word embeddings (BERT-BiRNN).", "Visual features: The visual features are pre-extracted vectors with 4,096 dimensions, also provided in the Amazon Reviews Dataset (McAuley et al., 2015).", "The method to obtain the features uses a deep convolutional neural network which has been pre-trained on the ImageNet dataset for image classification (Deng et al., 2009).", "The visual features extracted represent a vectorial summary of the image taken from the last pooled layer of the network.", "He and McAuley (2016) have shown that this representation contains useful visual features for a number of tasks.", "We propose different ways to integrate visual features in our two monomodal QE approaches (Sec-tions 3.1 and 3.2).", "We compare each proposed model with its monomodal QE counterpart as baseline, both using the same hyperparameters.", "The feature-based textual features contain 15 numerical scores, while the visual feature vector contains 4,096 dimensions.", "To avoid over-weighting the visual features, we reduce their dimensionality using Principal Component Analysis (PCA).", "We consider up to 15 principal components in order to keep a balance between the visual features and the 15 text features from QuEst++.", "We choose the final number of principal components to keep according to the explained variance with the PCA, so this number is treated as a hyperparameter.", "After analysing the explained variance for up to 15 kept principal components (see Figure 4 in Appendix), we selected six numbers of principal components to train QE models with (1, 2, 3, 5, 10, and 15).", "As fusion strategy, we concatenate the two feature vectors.", "Multimodality is achieved with two changes in our monomodal models: multimodality integration ( where to integrate the visual features in the ar-chitecture), and fusion strategy ( how to fuse the visual and textual features).", "We propose the following places to integrate the visual feature vector into the BiRNN architecture: embed the visual feature vector is used after the word embedding layer; annot the visual feature vector is 
"Figure 1: High-level representation of the document-level BiRNN architecture, illustrating how the visual features are integrated into the model.", "To fuse the visual and text features, we reduce the size of the visual features using a dense layer with a ReLU activation and reshape it to match the shape of the text-feature vector.", "As fusion strategies between the visual and textual feature vectors, we propose the following: conc, concatenation with both source and target word representations for the 'embed' strategy, and concatenation with the text features for the 'last' strategy; mult, element-wise multiplication for the target word representations and concatenation for the source word representations for the 'embed' strategy, and element-wise multiplication with the text features for the 'annot' and 'last' strategies; mult2, element-wise multiplication for both source and target word representations (exclusive to the 'embed' model).", "Figure 1 presents the high-level architecture of the document-level BiRNN model, with the various multimodality integration and fusion approaches.", "For example, in the 'embed' setting, the visual features are fused with each word representation from the embedding layers.", "Since this strategy modifies the embedding of each word, it can be expected to have a bigger impact on the result.", "We use the standard training, development, and test datasets from the WMT'18 Task 4 track.", "For feature-based systems, we follow the built-in cross-validation in QuEst++ and train a single model with the hyperparameters found by cross-validation.", "For neural-based models, we use early stopping with a patience of 10 to avoid over-fitting, and all reported figures are averaged over 5 runs corresponding to different seeds.", "We follow the evaluation method of the WMT QE tasks: Pearson's r correlation as the main metric (Graham, 2015), and Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE) as secondary metrics.", "For statistical significance on Pearson's r, we compute the Williams test (Williams, 1959), as suggested by Graham and Baldwin (2014).", "For all neural-based models, we experiment with all three integration strategies ('embed', 'annot', and 'last') and all three fusion strategies ('conc', 'mult', and 'mult2') presented in Section 3.2.", "This leads to 6 multimodal models for each of BiRNN and BERT-BiRNN.", "In Tables 2 and 4, as well as in Figures 2 and 3, we report the top three performing models.", "We refer the reader to the Appendix for the full set of results.", "The first part of Table 2 presents the results for sentence-level multimodal QE with BiRNN.", "The best model is BiRNN+Vis-embed-mult2, achieving a Pearson's r of 0.535 and significantly outperforming the baseline (p-value < 0.01).", "Visual features can, therefore, help to improve the performance of sentence-level neural-based QE systems significantly.", "Figure 2 presents the result of the Williams significance test for the BiRNN model variants.", "It is a correlation matrix that can be read as follows: the value in cell (i, j) is the p-value of the Williams test for the change in performance of the model at row i compared to the model at column j (Graham, 2015).", "With the pre-trained token-level representations from BERT (second half of Table 2), the best model is BERT-BiRNN+Vis-annot-mult, achieving a Pearson's r of 0.602.", "Table 2 (Pearson / MAE / RMSE at sentence level on the WMT'18 dataset): BiRNN 0.504 / 0.539 / 0.754; +Vis-last-conc 0.483 / 0.531 / 0.746; +Vis-embed-mult 0.473 / 0.534 / 0.753; +Vis-embed-mult2 0.535 / 0.569 / 0.792; BERT-BiRNN 0.590 / 0.455 / 0.659; +Vis-annot-mult 0.602 / 0.454 / 0.654; +Vis-embed-conc 0.576 / 0.474 / 0.694; +Vis-embed-mult 0.598 / 0.486 / 0.686.",
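The Williams test itself is simple to compute. A sketch following, to the best of my reading, the formulation popularized by Graham and Baldwin (2014) for comparing two dependent correlations that share the gold scores:

```python
from math import sqrt
from scipy.stats import t as t_dist

def williams_test(r12, r13, r23, n):
    """One-sided Williams test for r12 > r13.

    r12: correlation of system 1 with gold, r13: system 2 with gold,
    r23: correlation between the two systems' predictions, n: sample size.
    Returns the p-value under a t-distribution with n - 3 df.
    """
    K = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    num = (r12 - r13) * sqrt((n - 1) * (1 + r23))
    den = sqrt(2 * K * (n - 1) / (n - 3)
               + ((r12 + r13)**2 / 4) * (1 - r23)**3)
    return 1 - t_dist.cdf(num / den, df=n - 3)
```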
"This shows that, even when using better word representations, the visual features help to achieve further (albeit modest) improvements.", "Table 3 shows an example of predicted scores at the sentence level for the baseline model (BiRNN) and for the best multimodal BiRNN model (BiRNN+Vis-embed-mult2).", "The multimodal model predicted a score (-0.002) closer to the gold MQM score (0.167) than the baseline model did (-0.248).", "The French translation is poor ('cumulative-split' is, for instance, not translated), as the low gold MQM score shows.", "However, the (main) word 'stopwatch' is correctly translated as 'chronomètre' in French.", "Since the associated picture indeed represents a stopwatch, one explanation for this improvement could be that the multimodal model may have rewarded this correct and important part of the translation.", "Table 4 presents the results for the document-level feature-based and BiRNN neural QE models.", "The first section shows the official models from the WMT'18 QE Task 4 report (Specia et al., 2018a).", "The neural-based approach SHEF-PT is the winning submission, outperforming another neural-based approach (SHEF-mtl-bRNN).", "For our BiRNN models (second section), BiRNN+Vis-embed-conc performs only slightly better than the monomodal baseline.", "For the feature-based models (third section), on the other hand, the baseline monomodal QuEst++ is outperformed by various multimodal variants by a large margin, with the variant with two principal components (QuEst+Vis-2) performing best.", "The more PCA components are kept, the worse the results (see the Appendix for the full set of results).", "Figure 3 shows the Williams significance test for document-level QuEst++ on the WMT'18 dataset.", "As we can see, the QuEst+Vis-2 model outperforms the baseline with p-value = 0.002.", "Thus, visual features significantly improve the performance of feature-based QE systems compared to their monomodal QE counterparts.", "We introduced Multimodal Quality Estimation for Machine Translation, where an external modality, visual information, is incorporated into feature-based and neural-based QE approaches, at the sentence and document levels.", "The use of visual features extracted from images has led to significant improvements in the results of state-of-the-art QE approaches, especially at the sentence level.", "The version of deepQuest for multimodal QE and the scripts to convert document-level into sentence-level data are available at https://github.com/sheffieldnlp/deepQuest.", "This work was supported by funding from both the Bergamot project (EU H2020 Grant No. 825303) and the MultiMT project (EU H2020 ERC Starting Grant No. 678017)." ]
[ "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "method", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other" ]
[ "Predicting the persuasiveness of arguments has applications as diverse as writing assistance, essay scoring, and advertising.", "While clearly relevant to the task, the personal characteristics of an argument's source and audience have not yet been fully exploited toward automated persuasiveness prediction.", "In this paper, we model debaters' prior beliefs , interests , and personality traits based on their previous activity, without dependence on explicit user profiles or questionnaires.", "Using a dataset of over 60,000 argumentative discussions, comprising more than three million individual posts collected from the subreddit r/ChangeMyView , we demonstrate that our modeling of debater's characteristics enhances the prediction of argument persuasiveness as well as of debaters' resistance to persuasion.", "Persuasion is a primary goal of argumentation (O'Keefe, 2006).", "It is often carried out in the form of a debate or discussion, where debaters argue to persuade others to take certain stances on controversial topics.", "Several studies have examined persuasiveness in debates by probing the main factors for establishing persuasion, particularly regarding the role of linguistic features of debaters' arguments (Zhang et al., 2016), the interaction between debaters (Tan et al., 2016), and the personal characteristics of debaters (Durmus and Cardie, 2018).", "While the impact of debaters' characteristics on persuasiveness has been observed in online debates, the exploitation of these characteristics for predicting persuasiveness has been done based on explicit characteristics-related information in users' profiles or on questionnaires.", "For example, Lukin et al. (2017a) performed a personality trait test for selected people and asked them for their stances on specific topics to estimate their beliefs.", "Also, Durmus and Cardie (2018) used the information in users' profiles in an online forum, where their stances on controversial topics are explicitly stated, as a proxy of their beliefs.", "Such a means of exploitation limits the applicability of predicting persuasiveness, as the characteristics of debaters are usually not explicitly available in online debates, and it is not practicable to survey every debater.", "The paper at hand studies how the characteristics of debaters can be modeled automatically and utilized successfully for predicting persuasiveness.", "To this end, we propose a new approach of various features that capture the beliefs, interests, and personality traits of debaters on the subreddit ChangeMyView based on the debaters' previous activity on the Reddit.com platform.", "We apply this approach to the tasks of predicting argument persuasiveness and predicting debater's resistance to persuasion.", "Our experiments show that incorporating debater characteristics improves the prediction effectiveness of the two tasks over previous approaches which rely primarily on linguistic features.", "Interestingly, personality traits alone were the most predictive feature for resistance to persuasion, outperforming the linguistic features of the post itself.", "1. A large-scale corpus of argumentative and general discussions mined from Reddit.com.", "1 2. Features that capture the beliefs, interests, and personality traits of debaters based on their posting history.", "3. 
"To reproduce our experiments, the code can be found at https://github.com/webis-de/ACL-20.", "Related Work.", "The prediction of argument persuasiveness has been investigated in several studies (e.g., Tan et al., 2016; Zhang et al., 2016; Persing and Ng, 2017; Hidey and McKeown, 2018).", "To mitigate the lack of annotated data, Persing and Ng (2017) proposed a lightly supervised model for persuasiveness scoring by explicitly modeling errors that negatively impact the persuasiveness of an argument.", "Musi et al. (2018) built an annotated corpus of concessions in CMV discussions using expert annotations and automatic classification.", "They observed that concessions are equally distributed among persuasive and non-persuasive threads and that they do not play any significant role as a means of persuasion.", "Studying the effect of argument sequencing, Hidey and McKeown (2018) provided evidence that the order in which arguments are presented plays a crucial role in persuasion.", "Considering the importance of linguistic features, Luu et al. (2019) studied debater skill as it improves over time due to prolonged interaction with other debaters.", "Combining linguistic features such as the length of turns and the co-occurrence of hedges and fighting words, they developed a strong estimator of debaters' persuasive skill over time.", "Apart from content-based features, modeling the audience is crucial for predicting persuasiveness.", "Lukin et al. (2017a) studied the interaction of social media argument types with audience factors, to compare the belief change that results from social media dialogs to that from professionally curated monologic summaries.", "Participants were profiled for prior beliefs and personality types; neutral and balanced arguments were successful at changing the beliefs of all participants.", "In contrast, an entrenched audience was convinced by more emotional dialogs.", "Durmus and Cardie (2018) further explored the role of prior beliefs by predicting the success of debaters with explicitly stated religious and political ideologies, and found that readers were more likely to be convinced by a debater with the same ideology.", "Longpre et al. (2019) examined linguistic features of debates together with audience features such as demographic information, prior beliefs, and debate platform behavior.", "They found that, for a priori undecided users, audience features were prominent in predicting persuasiveness.", "For decided users, stylistic features of the argument were more effective.", "Closely related to our work, Durmus and Cardie (2019) explored the effects of debaters' language, their prior beliefs and traits, and their social interactions with other users on the DDO (debate.org) platform.", "The social interaction features were crucial in predicting the success of a debater, and combining them with features capturing debaters' language performed best.", "DDO explicitly provides information on the personal traits of debaters, including demographics such as gender and ethnicity, as well as users' beliefs.", "Our data source lacks this information, which increases the difficulty of modeling users.", "Recently, Guo et al. (2020) modeled the interplay of comments to study their cumulative influence on persuading the audience.",
"They proposed a sequential model that captures the interplay as local and non-local dependencies and outperforms studies focusing only on lexical features.", "The ChangeMyView subreddit (CMV) has been exploited for argument persuasiveness in many studies.", "For example, Tan et al. (2016), Hidey and McKeown (2018), and Habernal et al. (2018) used CMV as a source of real-world persuasive discourse.", "In this paper, we address the two persuasiveness tasks that have been proposed by Tan et al. (2016):", "1. Predicting argument persuasiveness: given a debate topic and an argument regarding it, the task is to predict if the argument is persuasive, in terms of whether it is able to change the stance of an opponent.", "2. Predicting resistance to persuasion: given a controversial topic (with a specific stance towards it) written by a debater, the task is to identify whether the debater's stance is resistant.", "We use Reddit.com as a source of debates.", "This platform comprises a variety of user-generated content, organized within communities called subreddits.", "The subreddit r/ChangeMyView (CMV) focuses on organized debates.", "As shown in Figure 1, contributors to CMV make an original post (OP) stating their stance on a debate topic of their choice.", "Other Reddit users may post opposing comments in response, to which the submitter of the OP may respond in turn, and award a delta to any comment that successfully changed their stance.", "The CMV setting allows deriving gold-standard labels for the two studied persuasiveness tasks.", "Figure 1: Exemplary excerpt of a CMV original post and two comments (i.e., arguments). Original post title: 'Cars should be equipped with both angry and apologetic horns'. Comment #1: '...A standard and widely used method of conveying thanks and apology already exists: blinking your hazards a few times...'. Comment #2: '...You jump a big step here, because you don't explain why the traditional light wave/hand under mirror gesture isn't effective. I see it pretty much every time...'. OP reply: 'Ah, really? Didn't know that! I give you delta!'", "Specifically, we can assume that the comments that receive deltas are persuasive compared to those which do not.",
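For illustration, deriving binary persuasiveness labels from a crawled CMV discussion tree might look like the following sketch. The data layout and the delta marker set are assumptions made for the example, not the authors' actual pipeline:

```python
DELTA_MARKERS = ("!delta", "Δ", "&#8710;")  # hypothetical marker set

def label_comments(discussion):
    """Label each top-level comment as persuasive (1) or not (0).

    discussion: dict with 'op_author' and 'comments', where each comment
    has 'id', 'author', 'body', and 'replies' (same structure, nested).
    A comment counts as persuasive if the OP replied to it with a delta.
    """
    labels = {}
    for comment in discussion["comments"]:
        awarded = any(
            reply["author"] == discussion["op_author"]
            and any(m in reply["body"] for m in DELTA_MARKERS)
            for reply in comment.get("replies", [])
        )
        labels[comment["id"]] = int(awarded)
    return labels
```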
(2016), which covers the complete set of posts and comments until 2015.", "To extend this corpus, we collected all available CMV posts and comments from the foundation of the subreddit in 2005 until September 2017.", "Table 1 shows statistics for both corpora.", "To acquire debaters' posting history, which we employ in our approach (Section 4), we also collect all posts and comments across all of Reddit for each debater.", "The resulting extended corpus, Webis-CMV-20 , is made available to the research community.", "We develop features to capture the interests , prior beliefs , and personality traits of debaters, and compute the similarity between two debaters based on these features.", "We capture debater interests based on their activities across subreddits.", "We rely on the assumption that the number of posts a debater makes in a subreddit (such as r/politics or r/religion ) indicates their degree of interest in that topic.", "For instance, if a debater is interested in religious issues, it is likely that she posted to those subreddits which discuss CMV corpus Webis-CMV-20 Discussion trees 20,626 65,169 Discussion Nodes 1,260,266 3,449,917 Posts (OPs) 14,174 28,722 Unique authors 86,888 155,337 Table 1: Statistics of the corpus collected by Tan et al. (2016) as well as our own corpus.", "We thus represent each debater by an interest vector depicting their interests across all subreddits.", "To constrain the impact of highly popular subreddits like r/AskReddit or r/announcements , we adopt a weighting scheme similar to tf-idf, where a subreddit s is represented as the fraction of a debater's total posts made within subreddit s , weighted by the logarithm of the ratio of the number of unique authors that posted in r/ChangeMyView to the number of authors that posted in subreddit s .", "The resulting interest vectors are very sparse (there are around one million subreddits), and thus not well suited for debaters similarity calculation.", "We apply two compression steps: First, we use data on subreddit topics from Snoopsnoo 3 to group subreddits into 720 categories, each represented as the sum of the interest vector elements for its constituent subreddits.", "Second, we apply principal component analysis to the result, and retain only the first five principal components, resulting in a 5-dimensional interest vector for each debater.", "We assume that the totality of a debater's stances towards multiple topics is a good proxy for prior beliefs.", "To operationalize this assumption, we represent each debater by a belief vector , with each element representing the stance towards a particular topic.", "As topics, we consider the titles of Wikipedia articles: 4 across all Reddit posts by a given debater, we identify Wikipedia entities via entity linking, 5 compute the sentiment score 6 of sentences that mention entities, and assign this score as the stance of the debater towards this entity in the belief vector; entities mentioned in multiple contexts receive the median sentiment score.", "Previous studies on the role of personality traits in influence (Nguyen et al., 2011) and argument synthesis (El Baff et al., 2019) used a psychometric dictionary-based text analysis.", "A similar approach for extracting personality traits using an external service (IBM Personality Insights) was done by Shmueli-Scheuer et al. 
"A similar approach for extracting personality traits using an external service (IBM Personality Insights) was taken by Shmueli-Scheuer et al. (2019).", "Overall, those studies showed that including personality-trait features increases effectiveness on persuasion detection.", "Hence, we process debaters' posts to reveal their personality traits, representing each debater by a traits vector containing the distribution of the words in the debater's posts across particular classes such as adventurous, genuine, and self-conscious, to name a few.", "To this end, we apply the widely used Linguistic Inquiry and Word Count (LIWC) tool (Pennebaker et al., 2015) to the first 1000 words extracted from all Reddit posts made by a debater in temporal order.", "(Footnote 7: The LIWC API recommends input of at least 300 and at most 1000 words; hence we exclude debaters with less than 300 words of posting activity.)", "For factors such as the big five personality traits, LIWC reports both raw scores and percentiles.", "Based on preliminary experiments, we use the concatenation of both as the final traits vector.", "Given the debater feature vectors created as described above, we compute the similarity between a pair of debaters as the cosine similarity of the concatenation of their characteristic vectors.", "Figure 2 shows an example of computing the similarity between users based on their interest vectors.", "We evaluate our approach on the tasks described in Section 3.", "The tasks are predicting argument persuasiveness and predicting resistance to persuasion.", "As a basis for our experiments, we use the CMV corpus of Tan et al. (2016) and our extended corpus Webis-CMV-20 (see Section 3).", "Since our approach depends on the activity history in previous Reddit.com posts for modeling debaters' characteristics, we retain only those original posts and associated discussions where sufficient prior posting history, at least for the author of the original post, is available.", "For predicting argument persuasiveness, we consider only the discussions where at least one delta was awarded.", "For each comment that received a delta, we sample another comment of similar length from the same discussion that did not receive one, if such a comment exists.", "This procedure yields a total of 16,050 samples comprising the original post, the comment, the respective author characteristics, and the binary target of whether a delta was awarded, out of which 8,247 are positive (awarded a delta) and 7,803 negative; 3,554 of all samples are held out for testing.", "For predicting resistance to persuasion, we sample 3,186 submissions whose author awarded at least one delta, and 4,935 submissions where no delta was awarded.", "Each sample comprises only the original post with its author characteristics, along with the binary target of whether a delta was awarded.", "We hold out 1,330 samples for testing.", "Table 2 shows statistics of the training and holdout datasets for the two studied tasks.",
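The debater-similarity computation above reduces to a cosine over concatenated characteristic vectors; a minimal sketch (names are illustrative):

```python
import numpy as np

def debater_similarity(feats_a, feats_b):
    """Cosine similarity of the concatenation of a debater's characteristic
    vectors, e.g., [interest_vec, belief_vec, traits_vec]."""
    a = np.concatenate(feats_a)
    b = np.concatenate(feats_b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```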
"For our experiments, we re-implement the most powerful features proposed by Tan et al. (2016), including BOW, several interplay features (e.g., the number of common words between the original post and the comment), and various style features (e.g., the intensity of emotion and concreteness).", "We compare these features to the features proposed in Section 4 that model debater characteristics.", "Following related work, we employ a logistic regression classifier with L1 regularization.", "We tune the hyperparameters via 5-fold cross-validation on the training sets.", "While incorporating debater characteristics in a persuasiveness prediction model leads to small improvements in our experiments, we find that predicting a debater's resistance using only personality traits outperforms all other feature sets.", "Predicting argument persuasiveness. Features based only on the content of the post pair have proven quite effective at predicting persuasiveness in previous work: Tan et al. (2016) found the comment word count by itself to achieve significantly better-than-chance accuracy.", "Due to our sampling strategy, which selects negative samples of similar length to the positive ones, this feature performs considerably worse on our new corpus.", "We further explore how our features for modeling debater characteristics, when combined with linguistic features, improve the classification accuracy.", "As can be seen in Table 3, personality traits, interests, and beliefs slightly outperform linguistic features.", "On the CMV corpus, a model using only trait features is most effective, achieving 61.8% accuracy (AUC 0.66), while linguistic features only achieve an accuracy of 60.4% (AUC 0.61).", "Table 4 shows the results for the debater's resistance to persuasion task on both corpora.", "As reported by Tan et al. (2016), predicting a debater's resistance to persuasion using only linguistic features is a very challenging task (they showed that human annotators performed at no better than chance level).", "Our personality-trait features vastly outperform purely linguistic features across both corpora.", "However, the individual traits themselves show a weak association with resistance to persuasion; for instance, the Pearson correlation coefficients are very small for the big five personality traits agreeableness (0.075), conscientiousness (-0.037), extraversion (-0.046), neuroticism (-0.067), and openness (0.019), suggesting that only a complex interplay of these characteristics is predictive of resistance to persuasion.", "This paper proposes a new approach for modeling the personal characteristics of debaters (interests, prior beliefs, and personality traits) for predicting both argument persuasiveness and debaters' resistance to persuasion.", "We hypothesize that these characteristics can be induced automatically from the history of debaters' activity, such as their earlier texts.", "Based on this hypothesis, we develop a set of features to capture debaters' characteristics using the Reddit.com platform.", "Applying these features to persuasiveness corpora derived from the subreddit r/ChangeMyView, we achieve a fair improvement on the studied persuasiveness tasks, particularly in predicting debaters' resistance to persuasion.", "In the future, we plan to consider the ethos mode of persuasion by exploring how debaters strengthen their credibility in debates." ]
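A hedged sketch of the classifier setup described above (L1-regularized logistic regression tuned with 5-fold cross-validation); the solver and the C grid are assumptions, since the paper does not report them:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# L1 penalty requires a solver that supports it, e.g., liblinear.
clf = GridSearchCV(
    LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},  # assumed grid
    cv=5,
    scoring="accuracy",
)
# Usage: clf.fit(X_train, y_train); clf.score(X_test, y_test)
```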
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "abstain", "objective", "objective", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "method", "objective" ]
[ "The explicit use of syntactic information has been proved useful for neural machine translation (NMT).", "However, previous methods resort to either tree-structured neural networks or long linearized sequences, both of which are inefficient.", "Neural syntactic distance (NSD) enables us to represent a constituent tree using a sequence whose length is identical to the number of words in the sentence.", "NSD has been used for constituent parsing, but not in machine translation.", "We propose five strategies to improve NMT with NSD.", "Experiments show that it is not trivial to improve NMT with NSD; however, the proposed strategies are shown to improve translation performance of the baseline model (+2.1 (EnJa), +1.3 (Ja En), +1.2 (EnCh), and +1.0 (ChEn) BLEU).", "In recent years, neural machine translation (NMT) has been developing rapidly and has become the de facto approach for machine translation.", "To improve the performance of the conventional NMT models (Sutskever et al., 2014; Bahdanau et al., 2014), one effective approach is to incorporate syntactic information into the encoder and/or decoder of the baseline model.", "Based on how the syntactic information is represented, there are two categories of syntactic NMT methods: (1) those that use tree-structured neural networks (NNs) to represent syntax structures (Eriguchi et al., 2016; Hashimoto and Tsuruoka, 2017), and (2) those that use linear-structured NNs to represent linearized syntax structures (Li et al., 2017; Ma et al., 2017, 2018).", "For the first category, there is a direct corresponding relationship between the syntactic structure and the NN structure, but the complexity of NN structures usually makes training inCorresponding author efficient.", "In contrast, for the second category, syntactic structures are linearized and represented using linear-structured recurrent neural networks (RNNs), but the linearized sequence can generally be quite long and therefore training efficiency is still a problem.", "Although using a shorter sequence may improve the efficiency, some syntactic information is lost.", "We propose a method of using syntactic information in NMT that overcomes the disadvantages of both methods.", "The basis of our method is the neural syntactic distance (NSD), a recently proposed concept used for constituent parsing (Shen et al., 2018; Gomez-Rodrguez and Vilares, 2018).", "NSD makes it possible to represent a constituent tree as a sequence whose length is identical to the number of words in the sentence (almost) without losing syntactic information.", "However, there are no previous studies that use NSD in NMT.", "Moreover, as demonstrated by our experiments, using NSD in NMT is far from straightforward, so we propose five strategies and verify the effects empirically.", "The strategies are summarized below.", "Extend NSD to dependency trees, which is inspired by the dependency language model (Shen et al., 2010).", "Use NSDs as input sequences 1 , where an NSD is regarded as a linguistic input feature (Sennrich and Haddow, 2016).", "Use NSDs as output sequences, where the NMT and prediction of the NSD are simultaneously trained through multi-task learning (Firat et al., 2016).", "Use NSD as positional encoding (PE), which is a syntactic extension of the PE of the Transformer (Vaswani et al., 2017).", "1 Throughout this paper, input means the input of an encoder or a decoder rather than the input of the NMT model (i.e., only source sentences), and output is similar.", "The NSD was firstly proposed by Shen et 
"The NSD was first proposed by Shen et al. (2018).", "This is the first method of linearizing a constituent tree with a sequence of length n, without loss of information, where n is the number of words in the sentence.", "Formally, given the sentence w = (w_1, ..., w_n), for any pair of contiguous words (w_i, w_{i+1}), we can define an NSD d(w_i), where i ∈ [1, n−1].", "In Shen et al. (2018), the NSD d_S(w_i) is defined as the height of the lowest common ancestor (LCA) of the two words.", "Subsequently, in Gómez-Rodríguez and Vilares (2018), the NSD d_G(w_i) was defined as the number of common ancestors of the two words.", "To make the definition complete, we define d(w_n) as follows: d_S(w_n) = H, d_G(w_n) = 0 (Eq. 1), where H is the height of the constituent tree.", "It is easy to prove that d_S(w_i) + d_G(w_i) = H, ∀i ∈ [1, n] (Eq. 2).", "We call d_S and d_G the absolute NSD.", "Furthermore, Gómez-Rodríguez and Vilares (2018) define the relative NSD as follows: d_R(w_i) = d_G(w_1) if i = 1, and d_R(w_i) = d_G(w_i) − d_G(w_{i−1}) for i ∈ [2, n] (Eq. 3).", "Figure 1 illustrates these NSDs.", "It is easy to see the one-to-one correspondence between the constituent tree and the (absolute or relative) NSDs.", "The effectiveness of all the different NSDs has been proven on constituent parsing.", "However, there has been no attempt to use NSD in machine translation.", "Footnote 4: d_S(w_n) and d_G(w_n) are undefined in both of the original papers.", "We give the definitions here to enable the use of NSD in NMT later.", "There are many previous studies on using dependency trees to improve NMT (Nguyen Le et al., 2017; Wu et al., 2017).", "Therefore, we extend NSD to dependency trees.", "Formally, the dependency NSD between two nodes is defined as follows: d_D(w_i) = i − h(i) (Eq. 4), where h(i) is the index of the head of w_i, and we let the index of the root be 0.", "Note that d_D(w_i) can be either positive or negative, representing directional information.", "Figure 2 gives an example.", "It is easy to see that for w = (w_1, ..., w_n), the lengths of d_S, d_G, d_R, and d_D are all n.",
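To make the NSD definitions concrete, a small sketch computing the relative NSD from d_G (Eq. 3) and the dependency NSD from head indices (Eq. 4); the input conventions are illustrative assumptions:

```python
def relative_nsd(d_g):
    """d_R from the common-ancestor counts d_G (Eq. 3)."""
    return [d_g[0]] + [d_g[i] - d_g[i - 1] for i in range(1, len(d_g))]

def dependency_nsd(heads):
    """d_D(w_i) = i - h(i) (Eq. 4). heads[i] is the 1-based index of the
    head of word i+1, with 0 denoting the root."""
    return [(i + 1) - h for i, h in enumerate(heads)]

# Head of word 1 is word 2, word 2 is the root, head of word 3 is word 2:
print(dependency_nsd([2, 0, 2]))  # [-1, 2, 1]: sign carries direction
```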
"Denoting the NSD sequence as d = (d_1, ..., d_n), we can see that d_i ∈ ℤ, ∀i ∈ [1, n], so we can obtain a sequence of embedding vectors e_d = (e_{d_1}, ..., e_{d_n}) as follows: e_{d_i} = E_d[d_i + (max(d) − min(d) + 1)] (Eq. 5).", "We call E_d the distance embedding matrix and e_d the syntactic embedding sequence.", "Note that d can be the NSD on either the source side or the target side, so there are two possible E_d, denoted E_d^s and E_d^t, respectively.", "The embeddings are calculated as follows: x_i^s = f_emb(E_w^s[w_i^s], e_{d_i}^s) (Eq. 6), x_i^t = f_emb(E_w^t[w_i^t], e_{d_i}^t) (Eq. 7), where e_{d_i}^s and e_{d_i}^t are defined by Eq. 5 on the source and target side, respectively, and E_w^s and E_w^t are the word embedding matrices on the two sides.", "Inspired by Sennrich and Haddow (2016), the function f_emb is used to combine two vectors.", "This function has many different options, such as: f_emb^‖(x, e) = x ‖ e (Eq. 8), f_emb^+(x, e) = x + e (Eq. 9), f_emb^{W,b}(x, e) = W_f(x ‖ e) + b_f (Eq. 10), where x, e, b_f ∈ ℝ^d and W_f ∈ ℝ^{d×2d}.", "The operator ‖ denotes the concatenation of two vectors.", "When NSD is used as the input sequence on the target side, there is one problem: e_d^t is unknown during testing.", "For this case, we use NSDs for both the input and output sequences, let the decoder predict the NSD on the fly using the strategy introduced in Section 3.3, and use the predicted NSD to calculate e_d^t.", "An NSD can be used to form the output sequence to improve NMT using the idea of multi-task learning.", "Specifically, we train the model to predict the NSD sequence.", "When NSD is used as the output sequence of the encoder, we minimize the distance (e.g., the cross entropy L_dist^ent; see Section 3.5 for details) between the predicted and the gold NSD sequences.", "When NSD is used as the output sequence of the decoder, besides minimizing the distance, we use the predicted NSD as the input of the next time step.", "Denote the hidden vectors as h = (h_1, ..., h_n).", "For the encoder, h_i = h_i^s and n = n_s, while for the decoder, h_i = h_i^t and n is the current time step of decoding.",
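A sketch of the three f_emb options of Eqs. 8-10 in PyTorch; the class name and interface are assumptions for illustration, not the authors' code:

```python
import torch
import torch.nn as nn

class CombineEmb(nn.Module):
    """Combine a word embedding x with a syntactic embedding e."""
    def __init__(self, d, mode="Wb"):
        super().__init__()
        self.mode = mode
        self.proj = nn.Linear(2 * d, d)  # W_f and b_f of Eq. 10

    def forward(self, x, e):
        if self.mode == "cat":   # Eq. 8: x ‖ e (output dimension doubles)
            return torch.cat([x, e], dim=-1)
        if self.mode == "add":   # Eq. 9: x + e
            return x + e
        return self.proj(torch.cat([x, e], dim=-1))  # Eq. 10: W_f(x ‖ e) + b_f
```

The Eq. 10 variant adds learnable parameters, which is consistent with the capacity argument the paper later uses to explain its better performance.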
"Then, we can obtain a sequence of predicted syntactic distances d̂ = (d̂_1, ..., d̂_n), calculated as follows: p(d̂_i | h_i) = softmax(W_d h_i + b_d) (Eq. 11), where W_d and b_d are parameters to be learned.", "By minimizing the distance between d̂_i and d_i, NSD can be used to enhance NMT.", "PE is used by the Transformer (Vaswani et al., 2017) to encode the positions of words.", "Formally, it is defined as follows: x′_i = x_i + PE(i) (Eq. 12), PE(i)_{2k} = sin(i / 10000^{2k/d}) (Eq. 13), PE(i)_{2k+1} = cos(i / 10000^{2k/d}) (Eq. 14), where x_i can be either x_i^s or x_i^t, and d is the dimension of the embedding vector.", "Similarly, we define syntactic PE as follows: PE(i)_{2k} = sin((d_i + max(d) − min(d)) / S_PE^{2k/d}) (Eq. 15), PE(i)_{2k+1} = cos((d_i + max(d) − min(d)) / S_PE^{2k/d}) (Eq. 16), where S_PE is a hyperparameter to be tuned.", "In this way, the periods of these two functions vary from 1 to S_PE.", "We define syntactic PE in this way because (1) according to a quantitative analysis of the experimental datasets, we found that the ranges of possible values are quite different between NSDs and word positions, so we tune S_PE instead of fixing it to 10000 as in Eqs. 13 and 14, and (2) d_i may be negative, so we adjust it to be positive.", "Instead of using the conventional cross-entropy loss function during training, we use the following loss function to make the NMT model learn NSD better: L = L_NMT + L_dist + L_dist^ent (Eq. 17).", "The first term is the cross-entropy loss of the NMT model: L_NMT = −∑_{⟨w^s, w^t⟩ ∈ D} log p(w^t | w^s) (Eq. 18), where D is the training dataset.", "The second term is the distance-aware loss, inspired by Shen et al. (2018): L_dist = ∑_{⟨w^s, w^t⟩ ∈ D} (L_dist^s(w^s) + L_dist^t(w^t)), with L_dist^s(w^s) = ∑_{i=1}^{n_s} (d̂_i − d_i)² + ∑_{i, j>i} [1 − sign(d_i − d_j)(d̂_i − d̂_j)]_+ (Eq. 19), where [x]_+ = max(0, x), and L_dist^t is defined similarly.", "The third term is the cross-entropy loss for NSD: L_dist^ent = ∑_{⟨w^s, w^t⟩ ∈ D} (L_dist^{ent(s)}(w^s) + L_dist^{ent(t)}(w^t)), with L_dist^{ent(s)}(w^s) = −∑_{d_i ∈ d^s} log p(d_i | h_i) (Eq. 20), and L_dist^{ent(t)} defined similarly.", "We experimented on two corpora: (1) ASPEC (Nakazawa et al., 2016), using the top 100K sentence pairs for training En→Ja models and the top 1M sentence pairs for training Ja→En models, and (2) LDC (footnote 5: LDC2002E18, LDC2003E07, LDC2003E14, the Hansards portion of LDC2004T07, LDC2004T08, and LDC2005T06), which contains about 1.2M sentence pairs, for training En→Ch and Ch→En models.", "To tackle memory-consumption problems, sentences longer than 150 tokens were filtered out so that the models could be trained successfully.", "Chinese sentences were segmented by the Stanford segmentation tool (footnote 6: https://nlp.stanford.edu/software/stanford-segmenter-2017-06-09.zip).", "For Japanese sentences, we followed the preprocessing steps recommended in WAT 2017 (footnote 7: http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2017/baseline/dataPreparationJE.html).", "The test set is a concatenation of NIST MT 2003, 2004, and 2005.", "Constituent trees are generated by the parser of Kitaev and Klein (2018) (footnote 8: https://github.com/nikitakit/self-attentive-parser), and dependency trees are generated by the parser of Dyer et al. (2015) (footnote 9: https://github.com/clab/lstm-parser).",
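A sketch of the syntactic PE of Eqs. 15-16 as reconstructed above; since the source equations were garbled in extraction, the exact exponent and shift here follow that reading and should be treated as assumptions:

```python
import numpy as np

def syntactic_pe(dist, d_model, s_pe=40):
    """Syntactic PE: shift the (possibly negative) NSD values to be
    non-negative, then apply sin/cos with a tunable base S_PE."""
    dist = np.asarray(dist, dtype=np.float64)
    pos = dist + (dist.max() - dist.min())        # adjust d_i to be positive
    k = np.arange(d_model // 2)
    denom = s_pe ** (2 * k / d_model)             # periods spread up to ~S_PE
    pe = np.zeros((len(dist), d_model))
    pe[:, 0::2] = np.sin(pos[:, None] / denom)
    pe[:, 1::2] = np.cos(pos[:, None] / denom)
    return pe
```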
"Note that although we only used syntactic information for English in our experiments, our method is also applicable to other languages.", "We implemented our method on OpenNMT (Klein et al., 2017) (footnote 10: http://opennmt.net), and used the Transformer as our baseline.", "As far as we know, there are no previous studies on using syntactic information in the Transformer.", "The vocabulary sizes for all languages are 50,000.", "Both the encoder and decoder have 6 layers.", "The dimensions of hidden vectors and word embeddings are 512.", "The multi-head attention has 8 heads, and the dropout probability is 0.1 (Srivastava et al., 2014).", "The number of training epochs was fixed to 50, and we used the model that performed best on the development set for testing.", "As for optimization, we used the Adam optimizer (Kingma and Ba, 2014), with β_1 = 0.9, β_2 = 0.998, and ε = 10⁻⁹.", "The learning-rate warmup and decay strategy of Vaswani et al. (2017) is also used, with 8,000 warmup steps.", "We also used the label smoothing strategy (Szegedy et al., 2016) with ε_ls = 0.1.", "Table 1 compares the effects of the strategies.", "We evaluate the proposed strategies using character-level BLEU (Papineni et al., 2002) for Chinese and Japanese, and case-insensitive BLEU for English.", "Comparison of different NSDs.", "The first five rows of Table 1 compare the results of using different NSDs.", "When NSD was used on the source side (En→Ja / En→Ch), all kinds of NSDs improved translation performance.", "This indicates that NSD can be regarded as a useful linguistic feature to improve NMT.", "In contrast, when NSD was used on the target side (Ja→En / Ch→En), d_S and d_G hurt performance.", "This is because the values of d_S and d_G are volatile.", "A tiny change in syntactic structure often causes a big change in d_S and d_G.", "Since the model has to predict the NSD during decoding, once there is one error, the subsequent predictions will be heavily influenced.", "The use of d_R and d_D remedies this problem.",
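The warmup-and-decay schedule referenced in the training configuration above is the standard Transformer one; a minimal sketch with the stated settings (d_model = 512, 8,000 warmup steps), assuming the formula of Vaswani et al. (2017):

```python
def transformer_lr(step, d_model=512, warmup=8000):
    """Learning rate rises linearly during warmup, then decays as 1/sqrt(step)."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```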
"Furthermore, the effects of d_S and d_G are similar, because they are equivalent in nature (refer to Eq. 2).", "NSD as PE.", "Rows 5 to 8 of Table 1 evaluate the use of dependency NSD (d_D) as syntactic PE.", "Note that in all these experiments, we used not only the syntactic PE but also the conventional PE.", "Experimental results show that this strategy is indeed useful.", "When the denominator base S_PE of Eqs. 15 and 16 was set to 10⁴, there was no improvement.", "When it was set to 40, the improvement was remarkable.", "This indicates that our design of the syntactic PE is reasonable.", "NSD as input/output and source/target sequences.", "Rows 8 to 12 of Table 1 are the results of using dependency NSD (i.e., d_D) as the input and/or output sequences on both sides.", "First, for the choice of f_emb, we can see that f_emb^‖ and f_emb^+ are similar, while f_emb^{W,b} yields better performance.", "This is because the model has to learn W_f and b_f, which increases the model capacity.", "Table 1 (truncated in extraction): comparison of the strategies; its columns are Type, I/O, S_PE, Loss, and BLEU for En→Ja, Ja→En, En→Ch, and Ch→En, with row 1 being the baseline (I/O and S_PE not applicable, no syntactic loss).", "Second, performance improved when NSDs were used as input sequences and as output sequences, and combining both obtained a further improvement.", "Third, NSDs improved performance on both the source and the target side.", "All these results indicate the robustness of NSDs.", "Effects of distance-aware training.", "The last three rows compare the effects of the different terms in the loss function.", "When only L_NMT is used, the performance is extremely poor.", "This is within expectations, because with only L_NMT, the weights related to NSDs keep their initial values and are never updated, which is detrimental to learning.", "Adding L_dist^ent improves the results significantly, but the improvement is lower than that of L_dist.", "This is because training with L_dist^ent treats different values of NSDs equally, while L_dist penalizes larger differences between the predicted NSD and the gold NSD more severely.", "We proposed five strategies to improve NMT with NSD.", "We found that relative NSDs and dependency NSDs are able to improve performance consistently, while absolute NSDs hurt performance in some cases.", "The improvement obtained by using NSDs is general in that NSDs can be used on both the source and target sides, both as input sequences and as output sequences.", "Using NSDs as syntactic PE is also useful, and training with a distance-aware loss function is quite important.", "We are grateful to the anonymous reviewers for their insightful comments and suggestions.", "Akihiro Tamura is supported by JSPS KAKENHI Grant Number JP18K18110.", "Tiejun Zhao is supported by the National Key Research and Development Program of China via grant 2017YFB1002102." ]
[ "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "other", "other", "other" ]
[ "We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.", "We compare previously used probability space and distant supervision assumptions (assumptions on the correspondence between the weak answer string labels and possible answer mention spans).", "We show that these assumptions interact, and that different configurations provide complementary benefits.", "We demonstrate that a multi-objective model can efficiently combine the advantages of multiple assumptions and outperform the best individual formulation.", "Our approach outperforms previous state-of-the-art models by 4.3 points in F1 on TriviaQA-Wiki and 1.7 points in Rouge-L on NarrativeQA summaries.", "1 1 Introduction Distant supervision assumptions have enabled the creation of large-scale datasets that can be used to train fine-grained extractive short answer question answering (QA) systems.", "One example is TriviaQA (Joshi et al., 2017).", "There the authors utilized a pre-existing set of Trivia question-answer string pairs and coupled them with relevant documents, such that, with high likelihood, the documents support answering the questions (see Fig. 1 for an illustration).", "Another example is the NarrativeQA dataset (Kocisky et al., 2018), where crowd-sourced abstractive answer strings were used to weakly supervise answer mentions in the text of movie scripts or their summaries.", "In this work, we focus on the setting of document-level extractive QA, where distant supervision is specified as a set A of answer strings for an input question-document pair.", "1 Based on the TriviaQA-Wiki leaderboard, our approach was the SOTA when this work was submitted on Dec 04, 2019.", "Question: How is Joan Molinsky better known?", "Answer: Joan Rivers : { Joan Rivers, Diary of a Mad Diva } P1: Joan Alexandra Molinsky, known professionally as Joan Rivers, was an American comedian, actress, writer, producer, and television host.", "Joan Rivers was strongly influenced by Lenny Bruce.", "P2: She received a Grammy Award for Best Spoken Word Album for her book, Diary of a Mad Diva .", "P3: Joan Alexandra Molinsky was born on June 8, 1933, in Brooklyn, New York.", "Before entering show business, she chose Joan Rivers as her stage name.", "Question: Where do the dancers purify themselves?", "TriviaQA NarrativeQA Figure 1: TriviaQA and NarrativeQA examples.", "Depending on the data generation process, the properties of the resulting supervision from the sets A may differ.", "For example, the provided answer sets in TriviaQA include aliases of original trivia question answers, aimed at capturing semantically equivalent answers but liable to introducing semantic drift.", "In Fig. 1, the possible answer string Diary of a Mad Diva is related to Joan Rivers, but is not a valid answer for the given question.", "On the other hand, the sets of answer strings in NarrativeQA are mostly valid since they have high overlap with human-generated answers for the given question/document pair.", "As shown in Fig. 
"As shown in Fig. 1, 'in the spring at mount helicon' and 'mount helicon' are both valid answers with relevant mentions.", "In this case, the annotators chose answers that appear verbatim in the text, but in the more general case, noise may come from partial phrases and irrelevant mentions.", "While distant supervision reduces the annotation cost, increased coverage often comes with increased noise (e.g., expanding entity answer strings with aliases improves coverage but also increases noise).", "Even for fixed document-level distant supervision in the form of a set of answers A, different interpretations of the partial supervision lead to different points in the coverage/noise space, and their relative performance is not well understood.", "This work systematically studies methods for learning and inference with document-level distantly supervised extractive QA models.", "Using a BERT (Devlin et al., 2019) joint question-passage encoder, we study the compound impact of: Probability space (Section 2): ways to define the model's probability space based on independent paragraphs or whole documents.", "Distant supervision assumptions (Section 3): ways to translate the supervision from possible strings A to possible locations of answer mentions in the document.", "Optimization and inference (Section 4): ways to define corresponding training objectives (e.g., HardEM as in Min et al. (2019) vs. maximum marginal likelihood) and make answer string predictions during inference (Viterbi or marginal inference).", "We show that the choice of probability space puts constraints on the distant supervision assumptions that can be captured, and that all three choices interact, leading to large differences in performance.", "Specifically, we provide a framework for understanding different distant supervision assumptions and the corresponding trade-off among the coverage, quality, and strength of the distant supervision signal.", "The best configuration depends on the properties of the possible annotations A and is thus data-dependent.", "Compared with recent work also using BERT representations, our study shows that the model with the most suitable probabilistic treatment achieves large improvements of 4.6 F1 on TriviaQA and 1.7 Rouge-L on NarrativeQA, respectively.", "Additionally, we design an efficient multi-loss objective that can combine the benefits of different formulations, leading to significant improvements in accuracy, surpassing the best previously reported results on the two studied tasks.", "Figure 2: The document-level QA model as used for test-time inference; BERT contextualized representations feed begin and end probabilities, which feed span probabilities and, in turn, string probabilities for candidate answers such as 'Joan Rivers' and 'Diary of a Mad Diva'.", "Results are further strengthened by transfer learning from fully labeled short-answer extraction data in SQuAD 2.0 (Rajpurkar et al., 2018), leading to a final state-of-the-art performance of 76.3 F1 on TriviaQA-Wiki and 62.9 on the NarrativeQA summaries task.", "2 Probability Space. Here, we first formalize both paragraph-level and document-level models, which have been previously used for document-level extractive QA.", "Typically, paragraph-level models consider each paragraph in the document independently, whereas document models integrate some dependencies among paragraphs.", "To define the model, we need to specify the probability space, consisting of a set of possible outcomes and a way to assign probabilities to individual outcomes.",
"For extractive QA, the probability space outcomes consist of token positions of answer mention spans.", "The overall model architecture is shown in Fig. 2.", "We use BERT (Devlin et al., 2019) to derive representations of document tokens.", "As is standard in state-of-the-art extractive QA models (Devlin et al., 2019; Lee et al., 2019; Min et al., 2019), the BERT model is used to encode a pair of a given question with one paragraph from a given document into neural text representations.", "These representations are then used to define scores/probabilities of possible answer begin and end positions, which are in turn used to define probabilities over possible answer spans.", "Footnote 2: The code is available at https://github.com/hao-cheng/ds_doc_qa.", "Then the answer string probabilities can be defined as the aggregation over all possible answer spans/mentions.", "In the following, we show that paragraph-level and document-level models differ only in the space of possible outcomes and the way of computing answer span probabilities from answer begin and end position scores.", "Scoring answer begin and end positions. Given a question q and a document d consisting of K paragraphs p_1, ..., p_K, the BERT encoder produces contextualized representations for each question-paragraph pair (q, p_k).", "Specifically, for each token position i_k in p_k, the final hidden vector h_(i,k) ∈ ℝ^d is used as the contextualized token embedding, where d is the vector dimension.", "The span-begin score is computed as s_b(i_k) = w_b^T h_(i,k) using a weight vector w_b ∈ ℝ^d.", "The span-end score s_e(j_k) is defined in the same way.", "The probabilities for a start position i_k and an end position j_k are P_b(i_k) = exp(s_b(i_k)) / Z_b (Eq. 1) and P_e(j_k) = exp(s_e(j_k)) / Z_e (Eq. 2), where Z_b and Z_e are normalizing factors, depending on the probability space definition (detailed below).", "The probability of an answer span from i_k to j_k is defined as P_s(i_k, j_k) = P_b(i_k) P_e(j_k).", "The partition functions Z_b and Z_e depend on whether we use a paragraph-level or document-level probability space.", "Paragraph-level model. In paragraph-level models, we assume that for a given question against a document d, each of its paragraphs p_1, ..., p_K independently selects a pair of answer positions (i_k, j_k), which are the begin and end of the answer from paragraph p_k.", "In the case that p_k does not support answering the question q, special NULL positions are selected (following the SQuAD 2.0 BERT implementation; footnote 3: https://github.com/google-research/bert).", "Thus, the set of possible outcomes in the paragraph-level probability space is the set of lists of begin/end position pairs, one from each paragraph: {[(i_1, j_1), ..., (i_K, j_K)]}, where i_k and j_k range over positions in the respective paragraphs.", "The answer positions in different paragraphs are independent, and the probability of each paragraph's answer begin and end is computed by normalizing over all possible positions in that paragraph, i.e., Z_b^k = ∑_{i ∈ I_k ∪ {NULL}} exp(s_b(i)) (Eq. 3) and Z_e^k = ∑_{j ∈ I_k ∪ {NULL}} exp(s_e(j)) (Eq. 4), where I_k is the set of all positions in paragraph p_k.", "The probability of an answer begin at i_k is P_b(i_k) = exp(s_b(i_k)) / Z_b^k, and the probability of an end at j_k is defined analogously.",
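A sketch contrasting the two normalizations (Eqs. 3-4 vs. Eqs. 5-6): paragraph-level models softmax within each paragraph plus a NULL slot, while document-level models softmax once over all positions in all paragraphs; the NULL scoring here is an illustrative assumption:

```python
import torch

def begin_probs(scores, mode="document"):
    """scores: list of per-paragraph begin-score tensors s_b."""
    if mode == "paragraph":
        null = torch.zeros(1)  # placeholder NULL score (an assumption)
        # Eqs. 3-4: normalize within each paragraph, including NULL
        return [torch.softmax(torch.cat([s, null]), dim=0) for s in scores]
    # Eqs. 5-6: one partition function over all positions in the document
    return torch.softmax(torch.cat(scores), dim=0)
```

End-position probabilities P_e are computed the same way from the end scores s_e.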
"The probability of a possible answer position assignment for the document d is then defined as P([(i_1, j_1), ..., (i_K, j_K)]) = ∏_k P_b(i_k) P_e(j_k).", "As we can see from the above definition, due to the independence assumption, models using paragraph-level normalization do not learn to directly calibrate candidate answers from different paragraphs against each other.", "Document-level model. In document-level models, we assume that for a given question against document d, a single answer span is selected (as opposed to one for each paragraph in the paragraph-level models).", "Footnote 4: In this paper, we focus on datasets where the document is known to contain a valid answer.", "It is straightforward to remove this assumption and consider document-level NULL for future work.", "Here, the possible positions in all paragraphs are part of a joint probability space and directly compete against each other.", "In this case, the set of possible outcomes is the set of token spans {(i, j)}, where i and j are the begin and end positions of the selected answer.", "The normalizing factors are therefore aggregated over all paragraphs, i.e., Z_b = ∑_{k=1}^{K} ∑_{i ∈ I_k} exp(s_b(i)) (Eq. 5) and Z_e = ∑_{k=1}^{K} ∑_{j ∈ I_k} exp(s_e(j)) (Eq. 6).", "Compared with (3) and (4), since there is always a valid answer in the document for the tasks studied here, NULL is not necessary for document-level models and thus can be excluded from the inner summation of (5) and (6).", "The probability of a possible outcome, i.e., an answer span, is P(i, j) = exp(s_b(i) + s_e(j)) / (Z_b Z_e).", "There are multiple ways to interpret the distant supervision signal from A as possible outcomes in our paragraph-level and document-level probability spaces, leading to corresponding training loss functions.", "Although several different paragraph-level and document-level losses (Chen et al., 2017; Kadlec et al., 2016; Clark and Gardner, 2018; Lin et al., 2018; Min et al., 2019) have been studied in the literature, we want to point out that when interpreting the distant supervision signal, there is a tradeoff among multiple desiderata: Coverage: maximize the number of instances of relevant answer spans, which we can use to provide positive examples to our model.", "Quality: maximize the quality of annotations by minimizing noise from irrelevant answer strings or mentions.", "Strength: maximize the strength of the signal by reducing uncertainty and pointing the model more directly at correct answer mentions.", "We introduce three assumptions (H1, H2, H3) for how the distant supervision signal should be interpreted, which lead to different tradeoffs among the desiderata above (see Table 1).", "We begin by setting up additional useful notation.", "Given a document-question pair (d, q) and a set of answer strings A, we define the set of A-consistent token spans Y_A in d as follows: for each paragraph p_k, span (i_k, j_k) ∈ Y_A^k if and only if the string spanning these positions in the paragraph is in A.", "For paragraph-level models, if for paragraph p_k the set Y_A^k is empty, we redefine Y_A^k to be {NULL}.", "Similarly, we define the set of A-consistent begin positions Y_{b,A}^k as the start positions of consistent spans: Y_{b,A}^k = ∪_{(i,j) ∈ Y_A^k} {i}.", "Y_{e,A}^k for A-consistent end positions is defined analogously.", "In addition, we term an answer span (i, j) correct for question q if its corresponding answer string is a correct answer to q, and the context of the specific mention of that answer string from positions i to j entails this answer.",
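A sketch of constructing the A-consistent span sets Y_A^k defined above; token-level matching and the span-length cap are illustrative assumptions (the definition is in terms of strings):

```python
def consistent_spans(paragraph_tokens, answers, max_len=10):
    """All (i, j) whose token span matches an answer string in A."""
    answer_toks = {tuple(a.split()) for a in answers}
    spans = []
    for i in range(len(paragraph_tokens)):
        for j in range(i, min(i + max_len, len(paragraph_tokens))):
            if tuple(paragraph_tokens[i:j + 1]) in answer_toks:
                spans.append((i, j))
    # Begin set Y_b,A = {i for (i, j) in spans}; ends analogously.
    return spans
```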
"Similarly, we term an answer begin/end position correct if there exists a correct answer span starting/ending at that position.", "H1: All A-consistent answer spans are correct.", "While this assumption is evidently often incorrect (low on the quality dimension ↘), especially for TriviaQA, as seen from Fig. 1, it provides a large number of positive examples and a strong supervision signal (high on coverage ↗ and strength ↗).", "We include this in our study for completeness.", "H1 translates differently into possible outcomes for corresponding models depending on the probability space (paragraph or document).", "Paragraph-level models select multiple answer spans, one for each paragraph, to form a possible outcome.", "Thus, multiple A-consistent answer spans can occur in a single outcome, as long as they are in different paragraphs.", "For multiple A-consistent answer spans in the same paragraph, these can be seen as mentions that can be selected with equal probability (e.g., by different annotators).", "Document-level models select a single answer span in the document, and therefore multiple A-consistent answer spans can be seen as occurring in separate annotation events.", "Table 2 shows in row one the log-probability of outcomes consistent with H1.", "H2: Every positive paragraph has a correct answer in its A-consistent set.", "Under this assumption, each paragraph with a non-empty set of A-consistent spans (termed a positive paragraph) has a correct answer.", "As we can see from the TriviaQA example in Fig. 1, this assumption is correct for the first and third paragraphs, but not the second one, as it only contains a mention of a noisy answer alias.", "This assumption has medium coverage, as it generates positive examples from multiple paragraphs but does not allow multiple positive mentions in the same paragraph.",
"It also decreases noise (higher quality ↗): for example, it does not claim that all the mentions of Joan Rivers in the first paragraph support answering the question.", "The strength of the supervision signal is weakened (↘) relative to H1, as now the model needs to figure out which of the multiple A-consistent mentions in each paragraph is correct.", "H2 has two variations: correct span, assuming that one of the answer spans (i_k, j_k) in Y_A^k is correct, and correct position, assuming that the paragraph has a correct answer begin position from Y_{b,A}^k and a correct answer end position from Y_{e,A}^k, but its selected answer span may not necessarily belong to Y_A^k.", "Table 2: Objective functions for a document-question pair (d, q) under different distant supervision assumptions. Span-based: H1: ∑_{k∈K} ∑_{(i_k,j_k)∈Y_A^k} log P_s(i_k, j_k); H2: ∑_{k∈K} log ⊕_{(i_k,j_k)∈Y_A^k} P_s(i_k, j_k); H3: log ⊕_{k∈K} ⊕_{(i_k,j_k)∈Y_A^k} P_s(i_k, j_k). Position-based: H1: ∑_{k∈K} ∑_{i_k∈Y_{b,A}^k} log P_b(i_k) + ∑_{k∈K} ∑_{j_k∈Y_{e,A}^k} log P_e(j_k); H2: ∑_{k∈K} log ⊕_{i_k∈Y_{b,A}^k} P_b(i_k) + ∑_{k∈K} log ⊕_{j_k∈Y_{e,A}^k} P_e(j_k); H3: log ⊕_{k∈K} ⊕_{i_k∈Y_{b,A}^k} P_b(i_k) + log ⊕_{k∈K} ⊕_{j_k∈Y_{e,A}^k} P_e(j_k).", "For example, if A contains {abcd, bc}, then abc would have a correct begin and end, but not be a correct span.", "It does not make sense for modeling to assume the paragraph has correct begin and end positions instead of a correct answer span (i.e., we don't really want to get inconsistent answers like abc above), but given that our probabilistic model assumes independence of begin and end answer positions, it may not be able to learn well with span-level weak supervision.", "Some prior work (Clark and Gardner, 2018) uses an H2 position-based distant supervision assumption with a pair-paragraph model akin to our document-level ones.", "Lin et al. (2018) use an H2 span-based distant supervision assumption.", "The impact of position- vs. span-based modeling of the distant supervision is not well understood.", "As we will see in the experiments, for the majority of settings, position-based weak supervision is more effective than span-based for our model.", "For paragraph-level and document-level models, H2 corresponds differently to possible outcomes.", "For paragraph models, one outcome can select answer spans in all positive paragraphs and NULL in negative ones.", "For document-level models, we view answers in different paragraphs as outcomes of multiple draws from the distribution.", "The identity of the particular correct span or begin/end position is unknown, but we can compute the probability of the event comprising the consistent outcomes.", "Table 2 shows the log-probability of the outcomes consistent with H2 in row two (span-based and position-based variants, when plugging in ∑ for ⊕).", "H3: The document has a correct answer in its A-consistent set (a correct span or correct begin/end positions), but not every positive paragraph needs to have one.", "It further improves supervision quality (↗) because, for example, it allows the model to filter out the noise in paragraph two in Fig. 1.",
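A sketch of the position-based MML objectives of Table 2 for H2 and H3 under document-level normalization; marginalization is a logsumexp over consistent positions, and HardEM would simply replace logsumexp with max (tensor layouts are assumptions):

```python
import torch

def mml_pos_loss(log_pb, log_pe, begin_sets, end_sets, hypothesis="H3"):
    """log_pb/log_pe: document-wide log-probabilities (Eqs. 5-6);
    begin_sets[k]/end_sets[k]: document-wide indices of Y_b,A^k / Y_e,A^k."""
    def marginal(log_p, sets):
        per_par = [torch.logsumexp(log_p[idx], dim=0) for idx in sets if len(idx)]
        if hypothesis == "H2":   # one marginal term per positive paragraph
            return torch.stack(per_par).sum()
        # H3: a single marginal over all consistent positions in the document
        return torch.logsumexp(torch.stack(per_par), dim=0)
    return -(marginal(log_pb, begin_sets) + marginal(log_pe, end_sets))
```

Paragraph-level H2 (H2-P) differs only in using the per-paragraph softmax of Eqs. 3-4, with NULL as the consistent position for negative paragraphs.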
"Since the model is given a choice of any of the A-consistent mentions, it has the capability to assign zero probability mass to the supervision-consistent mentions in that paragraph.", "On the other hand, H3 has lower coverage (↘) than H1 and H2, because it provides a single positive example for the whole document, rather than one for each positive paragraph.", "It also reduces the strength of the supervision signal (↘), as the model now needs to figure out which mention to select from the larger document-level set Y_A.", "Note that we can only use H3 coupled with a document-level model, because a paragraph-level model cannot directly trade off answers from different paragraphs against each other to select a single answer span from the document.", "As with the other distant supervision hypotheses, span-based and position-based definitions of the possible consistent outcomes can be formulated.", "The log-probabilities of these events are defined in row three of Table 2, when using ∑ for ⊕.", "H3 was used by Kadlec et al. (2016) for cloze-style distantly supervised QA with recurrent neural network models.", "The probability space (paragraph vs. document-level) and the distant supervision assumption (H1, H2, and H3, each position- or span-based) together define our interpretation of the distant supervision signal, resulting in definitions of probability space outcomes consistent with the supervision.", "Next, we define corresponding optimization objectives to train a model based on this supervision and describe the inference methods to make predictions with a trained model.", "For each distant supervision hypothesis, we maximize either the marginal log-likelihood of A-consistent outcomes (MML) or the log-likelihood of the most likely outcome (HardEM).", "The latter was found effective for weakly supervised tasks including QA and semantic parsing by Min et al. (2019).", "Table 2 shows the objective functions for all distant supervision assumptions, each comprising a pairing of a distant supervision hypothesis (H1, H2, H3) and a position-based vs. span-based interpretation.", "The probabilities are defined according to the assumed probability space (paragraph or document).", "In the table, K denotes the set of all paragraphs in the document, and Y^k denotes the set of weakly labeled answer spans for paragraph p_k (which can be {NULL} for paragraph-level models).", "Note that span-based and position-based objective functions are equivalent for H1 because of the independence assumption, i.e., P_s(i_k, j_k) = P_b(i_k) P_e(j_k).", "Inference: Since the task is to predict an answer string rather than a particular mention for a given question, it is potentially beneficial to aggregate information across answer spans corresponding to the same string during inference.", "The score of a candidate answer string can be obtained as P_a(x) = ⊕_{(i,j) ∈ X} P_s(i, j), where X is the set of spans corresponding to the answer string x, and ⊕ can be either ∑ or max.", "(Footnote 5: For inference with marginal (∑) scoring, we use an approximate scheme where we only aggregate probabilities of candidate strings generated from a 20-best list of begin/end answer positions for each paragraph.)", "It is usually beneficial to match the training objective with the corresponding inference method, i.e., MML with marginal inference (⊕ = ∑), and HardEM with max (Viterbi) inference (⊕ = max).",
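A sketch of the string-level aggregation P_a(x) described above, with ∑ (marginal) or max (Viterbi) aggregation over spans of the same string; the data structures are assumptions:

```python
from collections import defaultdict

def best_answer_string(span_probs, span_text, agg="sum"):
    """span_probs: {(i, j): P_s(i, j)}; span_text: {(i, j): answer string}."""
    scores = defaultdict(float)
    for span, p in span_probs.items():
        x = span_text[span]
        scores[x] = scores[x] + p if agg == "sum" else max(scores[x], p)
    return max(scores, key=scores.get)
```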
"Min et al. (2019) showed HardEM optimization was useful when using an H2 span-level distant supervision assumption coupled with max inference, but it is unclear whether this trend holds when ∑ inference is useful or when other distant supervision assumptions perform better.", "We therefore study exhaustive combinations of probability space, distant supervision assumption, and training and inference methods.", "Two datasets are used in this paper: TriviaQA (Joshi et al., 2017) in its Wikipedia formulation, and NarrativeQA (summaries setting) (Kocisky et al., 2018).", "Using the same preprocessing as Clark and Gardner (2018) for TriviaQA-Wiki (footnote 6: https://github.com/allenai/document-qa), we only keep the top 8 ranked paragraphs of up to 400 tokens for each document-question pair, for both training and evaluation.", "Following Min et al. (2019), for NarrativeQA we define the possible answer string sets A using Rouge-L (Lin, 2004) similarity with crowdsourced abstractive answer strings.", "We use identical data preprocessing and the evaluation script provided by the authors.", "In this work, we use the BERT-base model for text encoding and train our model with the default configuration as described in Devlin et al. (2019), fine-tuning all parameters.", "We fine-tune for 3 epochs on TriviaQA and 2 epochs on NarrativeQA.", "Here we look at the cross product of optimization (HardEM vs. MML) and inference (Max vs. Sum) for all distant supervision assumptions that result in models with latent variables.", "We therefore exclude H1 and look at the other two hypotheses, H2 and H3, each coupled with a span-based (Span) or position-based (Pos) formulation and a paragraph-level (P) or document-level (D) probability space.", "The method used in Min et al. (2019) corresponds to span-based H2-P with HardEM training and Max inference.", "The results are shown in Fig. 3.", "First, we observe that inference with Sum leads to significantly better results on TriviaQA under H2-P and H2-D, and a slight improvement under H3-D.", "On NarrativeQA, inference with Max is better.", "We attribute this to the fact that correct answers often have multiple relevant mentions in TriviaQA (also see Section 5.6), whereas for NarrativeQA this is rarely the case.", "Thus, inference with Sum in NarrativeQA could potentially boost the probability of irrelevant frequent strings.", "Consistent with Min et al. (2019), we observe that span-based HardEM works better than span-based MML under H2-P, with a larger advantage on NarrativeQA than on TriviaQA.", "However, under H2-D and H3-D, span-based MML performs consistently better than span-based HardEM.", "For position-based objectives, MML is consistently better than HardEM (potentially because HardEM may decide to place its probability mass on begin-end position combinations that do not contain mentions of strings in A).", "Finally, it can be observed that under each distant supervision hypothesis/probability space combination, position-based MML is always the best among the four objectives.", "Figure 3: Dev-set comparison of HardEM-Span, HardEM-Pos, MML-Span, and MML-Pos under H2-P, H2-D, and H3-D, each with Max and Sum inference; scores range from about 0.62 to 0.76.", "Position-based objectives may perform better due to the independence assumptions for begin/end positions in the model we use, and future work may arrive at different conclusions if position dependencies are integrated.", "Based on this thorough exploration, we focus on experimenting with position-based objectives with MML for the rest of this paper.", "In this subsection, we compare probability space and distant supervision assumptions.", "Table 3 shows the dev set results, where the upper section compares paragraph-level models (H1-P, H2-P) and the lower section compares document-level models (H1-D, H2-D, H3-D).", "The performance of models with both Max and Sum inference is shown.", "We report F1 and Exact Match (EM) scores for TriviaQA, and Rouge-L scores for NarrativeQA.", "Table 3: Comparison of distant supervision hypotheses using MML-Pos objectives on the TriviaQA and NarrativeQA dev sets (TriviaQA F1/EM, NarrativeQA Rouge-L). Paragraph-level models: H1-P Max 67.9/63.3, 55.3; H1-P Sum 70.4/66.0, 53.6; H2-P Max 71.9/67.7, 59.2; H2-P Sum 73.0/69.0, 57.8. Document-level models: H1-D Max 55.8/51.0, 59.4; H1-D Sum 65.2/61.2, 59.1; H2-D Max 70.3/66.2, 60.1; H2-D Sum 72.4/68.4, 59.9; H3-D Max 75.1/70.6, 59.1; H3-D Sum 75.3/70.8, 59.2.", "For TriviaQA, H3-D achieves significantly better results than other formulations.",
"Only H3-D is capable of cleaning noise from positive paragraphs that don't have a correct answer (e.g., paragraph two in Fig. 1), by deciding which A-consistent mention to trust.", "The paragraph-level models H1-P and H2-P outperform their corresponding document-level counterparts H1-D and H2-D.", "This may be due to the fact that without H3, and without predicting NULL, D models do not learn to detect irrelevant paragraphs.", "Unlike for TriviaQA, H2-D models achieve the best performance for NarrativeQA.", "We hypothesize this is due to the fact that positive paragraphs that don't have a correct answer are very rare in NarrativeQA (as summaries are relatively short and answer strings are human-annotated for the specific documents).", "Therefore, H3 is not needed to clean noisy supervision, and it is not useful since it also leads to a reduction in the number of positive examples (coverage) for the model.", "Here, document-level models always improve over their paragraph counterparts, by learning to calibrate paragraphs directly against each other.", "We now study two methods to further improve weakly supervised QA models.", "First, we combine two distant supervision objectives in a multi-task manner, i.e., H2-P and H3-D for TriviaQA, and H2-P and H2-D for NarrativeQA, chosen based on the results in Section 5.3.", "H2 objectives have higher coverage than H3 while being more susceptible to noise.", "Table 4: Dev set results comparing multi-objectives and clean supervision (TriviaQA F1/EM, NarrativeQA Rouge-L; 'clean' means initialized with SQuAD 2.0 supervision). Single-objective Par, no clean: Max 71.9/67.7, 59.2; Sum 73.0/69.0, 57.8; Par, clean: Max 74.2/70.1, 61.7; Sum 74.9/70.9, 61.7; Doc, no clean: Max 75.1/70.6, 60.1; Sum 75.3/70.8, 59.9; Doc, clean: Max 75.5/70.8, 62.8; Sum 75.5/70.9, 62.9. Multi-objective Par+Doc, no clean: Max 75.6/71.2, 60.5; Sum 75.9/71.6, 60.5; Par+Doc, clean: Max 75.8/71.2, 63.0; Sum 76.2/71.7, 63.1.", "Paragraph-level models have the advantage of learning to score irrelevant paragraphs (via NULL outcomes).", "Note that we use the same parameters for the two objectives, and the multi-objective formulation does not have more parameters and is no less efficient than the individual models.", "Second, we use external clean supervision from SQuAD 2.0 (Rajpurkar et al., 2018) to train the BERT-based QA model for 2 epochs.", "This model matches the P probability space and is able to detect both NULL and extractive answer spans.", "The resulting network is used to initialize the models for TriviaQA and NarrativeQA.", "The results are shown in Table 4.",
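A sketch of the multi-loss combination described above: the same per-paragraph begin/end scores are trained under both a paragraph-level H2 objective and a document-level H3 objective, so no parameters are added; equal weighting and the tensor layouts are assumptions:

```python
import torch

def multi_objective_loss(par_scores_b, par_scores_e, begin_sets, end_sets):
    """begin_sets/end_sets: per-paragraph local indices of Y_b,A^k / Y_e,A^k."""
    null = torch.zeros(1)
    lp = 0.0  # H2-P: per-paragraph softmax with a NULL slot
    for s_b, s_e, bs, es in zip(par_scores_b, par_scores_e, begin_sets, end_sets):
        log_pb = torch.log_softmax(torch.cat([s_b, null]), dim=0)
        log_pe = torch.log_softmax(torch.cat([s_e, null]), dim=0)
        b_idx = bs if len(bs) else [len(s_b)]  # NULL index for negative paragraphs
        e_idx = es if len(es) else [len(s_e)]
        lp = lp - torch.logsumexp(log_pb[b_idx], 0) - torch.logsumexp(log_pe[e_idx], 0)
    # H3-D: one document-wide softmax, marginal over all consistent positions
    log_pb = torch.log_softmax(torch.cat(par_scores_b), dim=0)
    log_pe = torch.log_softmax(torch.cat(par_scores_e), dim=0)
    offs, doc_b, doc_e = 0, [], []
    for s_b, bs, es in zip(par_scores_b, begin_sets, end_sets):
        doc_b += [offs + i for i in bs]
        doc_e += [offs + j for j in es]
        offs += len(s_b)
    ld = -torch.logsumexp(log_pb[doc_b], 0) - torch.logsumexp(log_pe[doc_e], 0)
    return lp + ld  # equal weighting is an assumption
```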
"It is not surprising that using external clean supervision improves model performance (e.g., Min et al. (2017)).", "We note that, interestingly, this external supervision narrows the performance gap between paragraph-level and document-level models, and reduces the difference between the two inference methods.", "Table 5 reports test set results on TriviaQA and NarrativeQA for our best models, in comparison to recent state-of-the-art (SOTA) models.", "For TriviaQA, we report F1 and EM scores on the full test set and the verified subset.", "For NarrativeQA, Rouge-L is reported.", "Table 5: Test set results. TriviaQA Wiki (Full F1/EM, Verified F1/EM): Ours (H2-P+H3-D) 76.3/72.1, 85.5/82.2; ours w/o SQuAD 75.7/71.6, 83.6/79.6; Wang et al. (2018b) 71.4/66.6, 78.7/74.8; Clark and Gardner (2018) 68.9/64.0, 72.9/68.0; Min et al. (2019) 67.1 F1 (full). NarrativeQA Summaries (Rouge-L): Ours (H2-P+H2-D) 62.9; ours w/o SQuAD 60.5; Nishida et al. (2019) 59.9 (54.7 w/o external data); Min et al. (2019) 58.8.", "Compared to the recent TriviaQA SOTA (Wang et al., 2018b), our best models achieve a 4.9 F1 and 5.5 EM improvement on the full test set, and a 6.8 F1 and 7.4 EM improvement on the verified subset.", "On the NarrativeQA test set, we improve Rouge-L by 3.0 over Nishida et al. (2019).", "The large improvement, even without additional fully labeled data, demonstrates the importance of selecting an appropriate probability space and interpreting the distant supervision in a way cognizant of the properties of the data, as well as selecting a strong optimization and inference method.", "With external fully labeled data to initialize the model, performance is further significantly improved.", "In this subsection, we carry out analyses to study the relative performance of paragraph-level and document-level models, depending on the size of the answer string set |A| and the number of A-consistent spans, which are hypothesized to correlate with label noise.", "We use the TriviaQA dev set and the best performing models, i.e., H2-P and H3-D with Sum inference.",
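Rouge-L, used above as the NarrativeQA metric and earlier (Section 5.1) to construct its answer sets A, is the LCS-based F-measure; a sketch, where the beta value and the set-construction threshold are assumptions:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of token lists a and b."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(cand, ref):
    l = lcs_len(cand, ref)
    if l == 0:
        return 0.0
    p, r = l / len(cand), l / len(ref)
    return 2 * p * r / (p + r)  # F-measure with beta = 1 (an assumption)

def answer_set(candidate_spans, gold_answers, threshold=0.5):
    """Keep document spans whose Rouge-L to a gold abstractive answer is high;
    the threshold here is illustrative, not the paper's setting."""
    return [c for c in candidate_spans
            if any(rouge_l(c.split(), g.split()) >= threshold for g in gold_answers)]
```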
"We categorize examples based on the size of their answer string set, |A|, and the size of their corresponding set of A-consistent spans, |I|.", "Specifically, we divide the data into 4 subsets and report performance separately on each subset, as shown in Table 6.", "Table 6: F1 scores on 4 subsets of the TriviaQA dev set, grouped by the size of the answer string set A and the corresponding set of possible mentions I (subset, |A|, |I|, size, H2-P vs. H3-D, gain): Q_ss, |A| = 1, |I| ≤ 5, 2585, 66.8 vs. 67.4 (+0.6); Q_ls, |A| > 1, |I| ≤ 5, 853, 68.7 vs. 70.1 (+1.4); Q_sl, |A| = 1, |I| > 5, 1149, 82.0 vs. 84.9 (+2.9); Q_ll, |A| > 1, |I| > 5, 3034, 86.3 vs. 88.4 (+2.1).", "In general, we expect Q_sl and Q_ll to be noisier due to the larger I, where Q_sl potentially includes many irrelevant mentions while Q_ll likely contains more incorrect answer strings (false aliases).", "We can observe that the improvement is more significant for these noisier subsets, suggesting that document-level modeling is crucial for handling both types of label noise.", "Distant supervision has been successfully used for decades for information extraction tasks such as entity tagging and relation extraction (Craven and Kumlien, 1999; Mintz et al., 2009).", "Several ways have been proposed to learn with DS, e.g., multi-label multi-instance learning (Surdeanu et al., 2012), assuming at least one supporting evidence (Hoffmann et al., 2011), integration of label-specific priors (Ritter et al., 2013), and adaptation to shifted label distributions (Ye et al., 2019).", "Recent work has started to explore distant supervision to scale up QA systems, particularly for open-domain QA where the evidence has to be retrieved rather than given as input.", "Reading comprehension (RC) with evidence retrieved from information retrieval systems establishes a weakly-supervised QA setting due to the noise in the heuristics-based span labels (Chen et al., 2017; Joshi et al., 2017; Dunn et al., 2017; Dhingra et al., 2017).", "One line of work jointly learns RC and evidence ranking using either a pipeline system (Wang et al., 2018a; Lee et al., 2018; Kratzwald and Feuerriegel, 2018) or an end-to-end model (Lee et al., 2019).", "Another line of work focuses on improving distantly-supervised RC models by developing learning methods and model architectures that can better use noisy labels.", "Clark and Gardner (2018) propose a paragraph-pair ranking objective, which has components of both our H2-P and H3-D position-based formulations.", "They don't explore multiple inference methods or combinations of objectives, and they use less powerful representations.", "Lin et al. (2018) propose a coarse-to-fine model to handle label noise by aggregating information from relevant paragraphs and then extracting answers from selected ones.",
(2019) propose a hard EM learning scheme which we included in our experimental evaluation.", "Our work focuses on examining probabilistic assumptions for document-level extractive QA.", "We provide a unified view of multiple methods in terms of their probability space and distant supervision assumptions and evaluate the impact of their components in combination with optimization and inference methods.", "To the best of our knowledge, the three DS hypotheses along with position and span-based interpretations have not been formalized and experimentally compared on multiple datasets.", "In addition, the multi-objective formulation is new.", "In this paper, we demonstrated that the choice of probability space and interpretation of the distant supervision signal for document-level QA have a large impact, and that they interact.", "Depending on the properties of the data, different configurations are best, and a combined multi-objective formulation can reap the benefits of its constituents.", "A future direction is to extend this work to question answering tasks that require reasoning over multiple documents, e.g., open-domain QA.", "In addition, the findings may generalize to other tasks, e.g., corpus-level distantly-supervised relation extraction.", "Some of the ideas in this work originated from Hao Cheng's internship with Google Research.", "We would like to thank Ankur Parikh, Michael Collins, and William Cohen for discussion and detailed feedback on this work, as well as other members from the Google Research Language team and the anonymous reviewers for valuable suggestions.", "We would also like to thank Sewon Min for generously sharing the processed data and evaluation script for NarrativeQA." ]
[ "method", "method", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "objective", "result", "method", "abstain", "result", "other", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "other", "other", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "method", "method", "objective", "other", "objective", "abstain", "objective", "abstain", "other", "other", "other" ]
[ "We introduce SUMMSCREEN , a summarization dataset comprised of pairs of TV series transcripts and human written recaps.", "The dataset provides a challenging testbed for abstractive summarization for several reasons.", "Plot details are often expressed indirectly in character dialogues and may be scattered across the entirety of the transcript.", "These details must be found and integrated to form the succinct plot descriptions in the recaps.", "Also, TV scripts contain content that does not directly pertain to the central plot but rather serves to develop characters or provide comic relief.", "This information is rarely contained in recaps.", "Since characters are fundamental to TV series, we also propose two entity-centric evaluation metrics.", "Empirically, we characterize the dataset by evaluating several methods, including neural models and those based on nearest neighbors.", "An oracle extractive approach outperforms all benchmarked models according to automatic metrics, showing that the neural models are unable to fully exploit the input transcripts.", "Human evaluation and qualitative analysis reveal that our non-oracle models are competitive with their oracle counterparts in terms of generating faithful plot events and can benefit from better content selectors.", "Both oracle and non-oracle models generate unfaithful facts, suggesting future research directions.", "1 1 Introduction Abstractive summarization aims to produce a summary that concisely expresses key points of the input document rather than merely extracting pieces of it.", "Existing datasets are constructed from various domains, such as news (Sandhaus, 2008; Hermann Work done while the author was at the University of Chicago.", "[ The apartment ] Sheldon : What color would you like to be ?", "Leonard : Well , I 'd like to be green , but you know you always take it .", "Sheldon : That 's not true .", "Any color 's fine with me .", "Yeah , I could be a a combination of blue and yellow .", "Leonard : Blue and yellow make green .", "Sheldon : Well , then it 's settled .", "Penny : Hi .", "Ready to go ?", "Sheldon : Oh , good news , we ordered lunch , so we can all stay here and play Lord of the Rings Risk .", "Amy : Sheldon , we said that we would play games with you tonight .", "Sheldon : Oh , no , we 'll still be playing it tonight , this game can easily take eight hours .", "Penny : Sweetie , you really thought I 'd want to do this ?", "Leonard : No .", "Penny : Well , did you tell him that ?", "Leonard : Yes .", "Penny : Did you say it out loud with words ?", "Leonard : No .", "Penny : I do n't want to spend the whole day playing a board game .", "Sheldon and Leonard are happy playing a board game until Amy and Penny say they are tired of doing what the guys want Transcript: Recap: Figure 1: Excerpts from an example from SUMMSCREEN .", "et al., 2015; Rush et al., 2015; Narayan et al., 2018; Grusky et al., 2018), online forums (Vlske et al., 2017), meeting dialogues (Janin et al., 2003; Carletta et al., 2005), and webpages (Chen et al., 2020).", "However, few datasets exist for abstractive summarization of narrative text, which focuses on entities and dialogue among entities, with plot details often communicated indirectly via dialogue.", "In this work, we build SUMMSCREEN , an abstractive summarization dataset combining TV series transcripts and episode recaps.", "Figure 1 shows an example from SUMMSCREEN .", "Several aspects of SUMMSCREEN make it a challenging testbed for abstractive summarization.", "First, the 
"First, the relationship between character dialogue and plot details is not straightforward.", "Plot events are often expressed indirectly in dialogue, and dialogue contains other information that is not directly relevant to the plot, such as character development and humor.", "Also, a typical episode has multiple subplots that proceed in parallel, with consecutive scenes often describing different subplots.", "Solving SUMMSCREEN requires drawing information from utterances across a wide range of the input and integrating the information to form concise plot descriptions.", "Moreover, since actual TV episodes ground their scripts with audio-visual accompaniment, many details may be omitted from the transcript itself.", "This omission of details and the other challenging aspects mentioned above have inspired research into other NLP tasks on TV show transcripts, such as entity tracking (Chen and Choi, 2016; Choi and Chen, 2018) and coreference resolution (Chen et al., 2017; Zhou and Choi, 2018).", "Another prominent characteristic of TV series transcripts is their focus on characters.", "To reflect this aspect, we propose two entity-centric metrics to evaluate the quality of generated plot summaries.", "One is based on bags of characters, which measures the overlap of the characters that appear in both the generated and reference recaps.", "The other metric measures character relations: the overlap of cooccurrences of character pairs in generations and recaps.", "We empirically evaluate several types of methods on SUMMSCREEN.", "We consider nearest neighbor models, which look up similar transcripts or recaps, neural abstractive summarization models, and hybrid models, which use the nearest neighbor models as content selectors followed by abstractive summarization.", "Oracle extractive approaches outperform all models on all the automatic metrics.", "These results suggest that the benchmarked methods are unable to fully exploit the input transcripts and that improving content selection may be a promising research direction.", "Human evaluations show that our non-oracle hybrid models are competitive with their oracle counterparts in terms of generating faithful plot events.", "Hybrid models may be promising approaches for future research.", "Qualitative analysis shows that neural models tend to generate generic summaries, hybrid models can benefit from better content selection, and hybrid models sometimes generate unfaithful details.", "There has been prior work on extractive screenplay summarization (Gorinski and Lapata, 2015; Papalampidi et al., 2020), and analyzing crime drama (Frermann et al., 2018).", "The majority of TV show transcripts are dialogues, relating our work to prior work on dialogue and meeting summarization.", "Relevant datasets have been studied for medical dialogues (Joshi et al., 2020; Krishna et al., 2021), chitchat (SAMSum; Gliwa et al., 2019), podcasts (Clifton et al., 2020), meetings (AMI; Carletta et al., 2005; ICSI; Janin et al., 2003; QMSum; Zhong et al., 2021), livestreams (StreamHover; Cho et al., 2021), online forums (ForumSum; Khalman et al., 2021) and news interviews (MediaSum; Zhu et al., 2021).", "There have been attempts at summarizing long-form text (other than screenplays), such as books (Mihalcea and Ceylan, 2007), scientific articles (PubMed and arXiv; Cohan et al., 2018), multiple news articles (Multi-News; Fabbri et al., 2019), opinionated text (RottenTomatoes; Wang and Ling, 2016), patents (Sharma et al., 2019), TV show stories (TVRecap; Chen and Gimpel, 2021) and (extractive summarization of) chapters of novels (Ladhak et al., 2020).",
"A more detailed discussion of the differences between these datasets and SUMMSCREEN is given in the next section.", "Recently there have been efforts on adapting resources for TV shows for different tasks, including question answering (Ma et al., 2018; Yang and Choi, 2019), speaker identification (Ma et al., 2017), sarcasm detection (Joshi et al., 2016), emotion detection (Zahiri and Choi, 2017; Hsu and Ku, 2018), character relation extraction (Yu et al., 2020), and story generation (Chen and Gimpel, 2021).", "An instance in SUMMSCREEN contains a transcript from a TV series and its corresponding recap.", "The transcripts consist of dialogue utterances with speaker names, and descriptions of scenes or character actions.", "The recaps are human-written summaries of the corresponding transcripts.", "Figure 1 shows an example in SUMMSCREEN from the TV show The Big Bang Theory.", "The transcript documents a dialogue involving four characters (Sheldon, Leonard, Penny, and Amy) about playing a board game, and the recap summarizes the dialogue into sentences.", "We use two sources to construct SUMMSCREEN: The TV MegaSite, Inc. (TMS; http://tvmegasite.net/) and ForeverDreaming (FD; transcripts.foreverdreaming.org), both of which provide community-contributed transcripts.", "As FD does not provide recaps, we obtain recaps of FD shows from Wikipedia and TVMaze (www.tvmaze.com, an online TV database curated by TV fans).", "To ensure the dataset quality of SUMMSCREEN, we filter out instances based on two criteria.", "First, the overlap ratio of TV show characters appearing in the recap and its transcript should be higher than 85%.", "We use this criterion to ensure that the alignments between recaps and transcripts are correct.", "Second, the number of transcript lines that have speaker information (character utterances) should be larger than 100.", "We use this criterion to eliminate transcripts that are essentially subtitles, i.e., utterances without speaker information.", "In practice, for each transcript line, if a colon symbol appears in the first 8 tokens and there exists at least one character name in front of the colon symbol, we count it as a character utterance.", "We note that FD and TMS do not have overlapping TV series.", "In Table 1, we compute n-gram overlap ratios between recaps and transcripts for measuring the abstractiveness of SUMMSCREEN.", "From the results, we find that although SUMMSCREEN has longer summaries, its fraction of overlapping four-grams is comparable to that of XSum (Narayan et al., 2018), which is known for abstractiveness, suggesting that SUMMSCREEN favors abstractive approaches.", "Table 2 shows data statistics and Figure 2 shows the genres of the TV shows from the two sources (the genre information is from TVMaze, where a TV show may correspond to multiple genres).", "When computing the number of unique characters in TV shows, we first collect the character names from TVMaze and the named entities preceding the colon symbols in transcripts.", "We then perform string matching to obtain numbers of TV show characters in recaps and transcripts.", "From these two tables, we observe that FD and TMS are different in many aspects.", "First, FD covers more diverse genres than TMS.", "This is partly due to the fact that TV shows from TMS are soap operas.", "Second, transcripts from FD are longer, which is caused by the fact that the transcripts from FD tend to have more descriptions of environments or character actions, whereas the ones from TMS are mostly made up of dialogue (see Table 2).",
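The two filtering criteria and the utterance heuristic described above translate directly into code. A minimal sketch, assuming whitespace tokenization and case-insensitive name matching (both assumptions; the paper does not specify these details, nor the exact direction of the overlap ratio):

```python
def is_character_utterance(line, character_names):
    # A colon must appear within the first 8 whitespace tokens, and the
    # text before that colon must contain at least one known character name.
    tokens = line.split()
    for i, tok in enumerate(tokens[:8]):
        if ":" in tok:
            speaker = " ".join(tokens[: i + 1]).split(":", 1)[0].lower()
            return any(name.lower() in speaker for name in character_names)
    return False

def keep_instance(recap_characters, transcript_characters, transcript_lines,
                  character_names):
    # Criterion 1: character overlap between recap and transcript > 85%
    # (interpreted here as the fraction of recap characters found in the transcript).
    if recap_characters:
        overlap = len(recap_characters & transcript_characters) / len(recap_characters)
        if overlap <= 0.85:
            return False
    # Criterion 2: more than 100 lines must carry speaker information,
    # which filters out subtitle-style transcripts.
    n_utterances = sum(
        is_character_utterance(line, character_names) for line in transcript_lines
    )
    return n_utterances > 100
```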
"Third, recaps from FD are shorter whereas recaps from TMS seek to cover more details.", "Fourth, writing styles are more diverse in FD than in TMS.", "In light of these differences, we treat FD and TMS as different datasets in the following experiments.", "We create train/dev/test splits for FD and TMS by ensuring the ratio to be roughly 10:1:1, and filter out instances in the dev/test splits if the reference texts are shorter than 30 word tokens.", "The statistics of the splits are shown in Table 3.",
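The split construction just described can be sketched as follows; the shuffling, seed, and field names are assumptions for illustration, not details from the paper:

```python
import random

def make_splits(instances, seed=0):
    rng = random.Random(seed)
    data = list(instances)
    rng.shuffle(data)
    n_part = len(data) // 12            # 10:1:1 => twelve roughly equal parts
    test = data[:n_part]
    dev = data[n_part: 2 * n_part]
    train = data[2 * n_part:]
    # Drop dev/test instances whose reference recap is shorter than 30 tokens.
    long_enough = lambda ex: len(ex["recap"].split()) >= 30
    return (train,
            [ex for ex in dev if long_enough(ex)],
            [ex for ex in test if long_enough(ex)])
```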
"3.2 Dataset Comparison", "We compare SUMMSCREEN to other abstractive dialogue summarization datasets in Table 4.", "Table 4: Statistics for datasets focusing on abstractive summarization for long-form text or dialogue.
Dataset          # instances   # tokens (input)   # tokens (summary)   # speakers   Domain
Multi-News       56.2k         2103.5             264.7                -            News
RottenTomatoes   3.7k          2124.7             22.2                 -            Reviews
arXiv            215k          4938.0             220.0                -            Science
PubMed           113k          3016.0             203.0                -            Science
GovReport        19.5k         9409.4             553.4                -            Government Reports
TVRecap          29.0k         1868.7             221.6                -            Television Series
SAMSum           16.4k         83.9               20.3                 2.2          Chitchat
ForumSum         4.1k          303.5              36.0                 6.7          Forum Messages
MediaSum         463.6k        1553.7             14.4                 6.5          News Interviews
AMI              137           4757.0             322.0                4.0          Meetings
ICSI             59            10189.0            534.0                6.2          Meetings
QMSum            1.8k          9069.8             69.6                 9.2          Meetings
SUMMSCREEN       26.9k         6612.5             337.4                28.3         Television Series", "SUMMSCREEN differs from other datasets in several ways:", "1. Compared to recently proposed large-scale dialogue summarization datasets (i.e., SAMSum and MediaSum), SUMMSCREEN has longer source inputs.", "2. Compared to other dialogue summarization datasets, SUMMSCREEN has larger numbers of speakers per instance.", "The TV series genre focuses on narrative, which is typically entity-centric and can include multiple parallel subplots in a single episode.", "3. Compared to AMI, ICSI and QMSum, which are long-input meeting summarization datasets, SUMMSCREEN has far more instances.", "4. Unlike most of the other datasets, SUMMSCREEN contains many episodes of a single show (e.g., more than 3k episodes for TMS).", "This episodic structure could be used to model character arcs, the evolution of character personality traits, and character relationships over episodes, among others.", "Properties (1) and (2) above make extracting information from transcripts more challenging than in other datasets.", "The third property means that SUMMSCREEN is large enough to train and evaluate neural methods.", "The Spotify Podcast Dataset (Clifton et al., 2020) and StreamHover (Cho et al., 2021) are similar to SUMMSCREEN in that they contain transcribed speech and summaries.", "However, the transcriptions are obtained automatically and therefore contain errors (for this reason, we do not include their statistics in Table 4).", "The datasets therefore involve speech processing (or at least handling speech recognition errors) compared to SUMMSCREEN, which has human-written transcripts.", "Since MediaSum is constructed from news transcripts, it is the most similar dataset in Table 4 to SUMMSCREEN.", "However, the summaries in MediaSum are twenty times shorter than those in SUMMSCREEN, and the average number of speakers per instance is only a quarter of that in SUMMSCREEN.", "Furthermore, our results in Sec. 5.2 indicate that our dataset is much harder than MediaSum, as the pretrained models perform worse on our dataset than on MediaSum according to automatic metrics.", "More detailed analysis is in the next subsection.", "In this subsection, we qualitatively analyze the challenging aspects of SUMMSCREEN.", "Since the transcripts focus on dialogue among characters, along with limited descriptions of scenes and actions, plot information is often not stated explicitly but rather only implied in the dialogue.", "For example, the transcript in Figure 1 does not explicitly describe what Sheldon and Leonard are playing.", "However, it is implied by Sheldon when he mentions playing Lord of the Rings Risk, and later by Penny when she says that she does not want to spend the whole day playing a board game.", "A related challenge is the need to understand the context in which characters' utterances are situated.", "In the example, the recap describes four characters taking sides regarding playing a board game.", "The transcript expresses the characters' sentiments through their interactions with one another.", "The conflict does not occur until Sheldon proposes to stay here and play Lord of the Rings Risk, and it becomes more apparent when Penny mentions she does not want to play the board game.", "Given the context, Leonard's series of yes and no responses to Penny's questions is largely due to the awkward situation, and it actually shows that he is happy playing the game as he and Sheldon are doing so at the beginning of the scene.", "Similarly, Amy mentions their previous agreement with Sheldon as a way of politely declining Sheldon's plan.", "The sentiments of characters are not necessarily easily discernible from their utterances but rather must be inferred using context and knowledge about the characters.", "Another challenge in SUMMSCREEN is the need to draw information from a wide range of the input transcripts, which arises for two primary reasons.", "First, there are many utterances that serve a purpose other than driving the plot forward.", "They may help to develop characters or character relationships, or to add humor or suspense.", "These lines enrich the narrative but their information content is often omitted from the summaries.", "For example, in the first instance in Figure 3, we show key lines from the transcript that pertain to the excerpt of the summary.", "There are many other lines between the lines shown, which are conversations between the doctor and other characters.", "This property necessitates the models' ability to correctly attend to major events across the transcript when generating summaries.", "The pattern can also be observed in Table 2 through the differences between the number of unique characters in recaps and transcripts.",
"More than half of the characters in the transcripts are not contained in the recaps.", "The second reason why information needs to be combined across wide ranges of the input relates to scene breaks and multiple plots.", "As a TV show often narrates a few plots in parallel, scene breaks are used to separate the stories.", "The discontinuity sometimes requires models to connect subplots hundreds of lines apart.", "For example, for the second instance in Figure 3, the show uses scene breaks to express what is happening when Cullen Bohannon escapes from the Swede, which is why there are almost two hundred lines between Cullen Bohannon's escape and his appearance at Durant's office.", "In this section, we describe modeling approaches that we benchmark on SUMMSCREEN.", "We note that since the meaning of sentences in transcripts is highly context-dependent, extractive summarization approaches are not expected to be useful for this dataset.", "We report the results from nearest neighbor-based extractive summarizers mostly for characterizing the dataset.", "We use Transformer-based sequence-to-sequence architectures (Vaswani et al., 2017).", "Because transcripts are quite long, we limit the number of encoder hidden vectors that are used in the decoder's attention mechanism.", "To do so, when encoding transcripts, we first append a special token [EOS] to each line of the transcript, and then linearize the transcript.", "We then only feed the vectors representing these special tokens to the decoder.", "We use the Longformer (Beltagy et al., 2020) as our encoder architecture, and set the [EOS] tokens to use global attention.", "For our decoders, we use the standard Transformer architecture.", "We consider two metrics when finding nearest neighbors: BM25 (Robertson et al., 1995), a popular metric for information retrieval, and ROUGE scores (Lin, 2004).", "We use ROUGE scores as they are used for evaluation, and we use BM25 because it is designed for retrieving long documents whereas ROUGE scores are not.", "When using ROUGE scores, we use the average of ROUGE-1, ROUGE-2, and ROUGE-L.", "We consider three types of nearest neighbor search: transcript-to-transcript, recap-to-transcript, and recap-to-recap.", "Recap-to-transcript (NNM-r2t).", "We use each sentence in the recap as a query and the lines in the corresponding transcript as candidates.", "The generation is formed from the nearest neighbors of each sentence.", "We use BM25 or ROUGE scores as the metric.", "This method can serve as an oracle result for an extractive summarization system, showing roughly how much information can be extracted at the utterance level from the source transcript.", "Transcript-to-transcript (NNM-t2t).", "We use the transcripts in the test sets as queries, the transcripts in the training sets as candidates, and then find the nearest neighbors using BM25.", "The generations are the corresponding recaps.", "This baseline measures the similarity of instances between training and test splits.", "Recap-to-recap (NNM-r2r).", "This setting is similar to the transcript-to-transcript setting, but we use recaps for both queries and candidates, and we use ROUGE and our proposed entity-centric scores (see Sec. 5.1 for more details) as the metric.",
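As a concrete illustration of the NNM-r2t variant described above with BM25, here is a minimal sketch; whitespace tokenization and the third-party rank-bm25 package are assumptions, since the paper does not name its BM25 implementation:

```python
from rank_bm25 import BM25Okapi  # pip install rank-bm25

def nnm_r2t(recap_sentences, transcript_lines):
    # For each recap sentence, retrieve the most similar transcript line
    # under BM25; the concatenated retrieved lines form the "generation".
    corpus = [line.lower().split() for line in transcript_lines]
    bm25 = BM25Okapi(corpus)
    extracted = []
    for sentence in recap_sentences:
        scores = bm25.get_scores(sentence.lower().split())
        best = max(range(len(transcript_lines)), key=scores.__getitem__)
        extracted.append(transcript_lines[best])
    return " ".join(extracted)
```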
"When using the entity metrics, we use the average of the 4 metric scores.", "This is an oracle baseline of the transcript-to-transcript setting and also measures the similarity of the splits.", "As content selection has been shown to be helpful in prior work (Gehrmann et al., 2018; Liu et al., 2018), we use the recap-to-transcript nearest neighbor model and BM25 as the metric to select the most salient content from transcripts, and then apply neural models to the selected content when performing generation.", "As these methods combine nearest neighbor models and neural models, we refer to them as hybrid models.", "In particular, for each sentence in the recap, we find the top three most similar lines in the transcript, include two extra lines that come before or after the selected lines as context, and also include a line that is retrieved by using the whole recap.", "As the selected content is significantly shorter than the original transcript, it allows us to use pretrained models (after the selection steps, the average number of tokens of the transcripts for FD and TMS reduces to 1138.9 and 3252.7, respectively).", "Therefore, in this setting, we fine-tune a pretrained BART-large model (Lewis et al., 2020).", "We note that as the nearest neighbor models rely on the gold standard recaps, this hybrid model demonstrates an approximate upper bound on performance when using powerful content selectors (we use a maximum sequence length of 1024 for BART-large due to computational constraints, i.e., we truncate input sequences that are longer than 1024).", "To establish a non-oracle baseline, we train neural models to predict the selected lines, and then fine-tune BART-large models on the predicted lines.", "Details of the architecture for this component, which we call our neural content selector, are in the appendix.", "We report BLEU (Papineni et al., 2002), ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL).", "We report the average of these four metrics as it generally shows the semantic similarities between generations and references.", "We will refer to these metrics as generic metrics as they treat each word equally.", "As characters are fundamental to TV show plots, we believe the quality of plot summaries also depends on including the right characters.", "To take this factor into account, we compute several bag of character (BoC) metrics based on the fraction of the overlapping characters between generated and gold standard recaps.", "Formally, we define the BoC precision to be $$\text{BoC-p} = \frac{|f(\text{generation})\,\&\,f(r)|}{|f(\text{generation})|} \quad (1)$$ where $f$ is a function that extracts the bag of characters from some text (we perform string matching based on the character names that are automatically extracted during dataset construction; see Sec. 3.1), $\&$ computes the intersection of two bags, $|\cdot|$ returns the size of its input, and $r$ is the gold standard recap.", "Similarly, we define the BoC recall to be $$\text{BoC-r} = \frac{|f(\text{generation})\,\&\,f(r)|}{|f(r)|} \quad (2)$$", "Since BoC does not consider relations between characters, we also report bag of character relations (BoR) metrics based on the cooccurrence of character pairs.", "We assume two characters are related when they appear in the same sentence.", "After obtaining the character relations from the gold standard recaps and the generations, we compute recall and precision against the recaps following the same approach as BoC.", "We note that the extracted relations are non-directional, and BoR does not consider the frequency of the cooccurrences.", "We also report the averages of both precisions and recalls from both the BoC and BoR metrics.",
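A compact sketch of the BoC and BoR computations defined above; the period-based sentence splitting and substring-based name matching are simplifying assumptions:

```python
from collections import Counter
from itertools import combinations

def bag_of_characters(text, names):
    # f(.): multiset of known character names mentioned in the text.
    low = text.lower()
    return Counter({n: low.count(n.lower()) for n in names if n.lower() in low})

def boc_scores(generation, reference, names):
    g = bag_of_characters(generation, names)
    r = bag_of_characters(reference, names)
    inter = sum((g & r).values())              # multiset intersection size
    precision = inter / max(sum(g.values()), 1)  # Eq. (1)
    recall = inter / max(sum(r.values()), 1)     # Eq. (2)
    return precision, recall

def bag_of_relations(text, names):
    # Two characters are related if they co-occur in the same sentence;
    # relations are undirected and counted without frequency.
    relations = set()
    for sentence in text.split("."):
        present = sorted({n for n in names if n.lower() in sentence.lower()})
        relations.update(combinations(present, 2))
    return relations
```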
3.1), & computes the intersection of two bags, | | returns the size of its inputs, and r is the gold standard recap.", "Similarly, we define the BoC recall to be | f ( generation )& f ( r ) | | f ( r ) | (2) Since BoC does not consider relations between characters, we also report bag of character relations (BoR) metrics based on the cooccurrence of character pairs.", "We assume two characters are related when they appear in the same sentence.", "After obtaining the character relations from the gold standard recaps and the generations, we compute recall and precision against the recaps following the same approach as BoC.", "We note that the extracted relations are non-directional, and BoR does not consider frequency of the cooccurrences.", "We also report the averages of both precisions and recalls from both the BoC and BoR metrics.", "We report test results for FD and TMS in Table", "5. Our findings for the nearest neighbor models are as follows:", "1. We find that the nearest neighbor models give strong performance on our dataset.", "In particular, NNM-r2t shows the best performance in general.", "This demonstrates that there is still room for improving the ability of our neural models to extract the most useful information from transcripts, suggesting that improved transcript modeling may be a fruitful research direction for these datasets.", "2. We observe that NNM-r2r exhibits different strengths when based on different metrics, for example, using ROUGE scores will lead to results favorable to generic metrics.", "As for the results involving neural models, our findings are as follows:", "1. The neural model shows strong performance in generic semantic matching but it is relatively weak in entity metrics compared to the non-oracle baselines.", "(see the appendix for more discussion).", "2. The hybrid model is better than the neural model in terms of generating character mentions and relations.", "With the help of the oracle content selector, the hybrid model improves significantly in both semantic matching and entity-related metrics, showing that future research may find improvement by designing better content selectors.", "We study the effect of transfer learning using these two resources.", "When doing so, we use the training and development sets constructed from both resources, and at test time, we evaluate models on the official test splits.", "We experiment with the oracle hybrid model and report results in Table", "6. In general, we find that extra training data helps FD.", "We hypothesize that this is due to the relatively small size of FD.", "However, for TMS, training on FD harms performance, which is likely because of the larger training set size for TMS and the differences between the two resources.", "We conduct human evaluations for three models: NNM-t2t, hybrid model, and hybrid model (oracle).", "To evaluate two key aspects of SUMMSCREEN , namely events and characters relationships, we ask human annotators two questions.", "The first is Do 8608 Generic Metrics Entity Metrics BLEU R1 R2 RL avg. BoC-p BoC-r BoR-p BoR-r avg. 
ForeverDreaming NNM-r2t (oracle, BM25) 3.4 34.3 6.6 29.6 18.5 70.5 61.9 36.4 16.1 46.2 NNM-r2t (oracle, RG) 3.9 34.8 8.5 31.5 19.7 76.7 63.3 46.5 21.3 52.0 NNM-r2r (oracle, RG) 9.9 38.8 11.5 33.9 23.5 50.6 51.4 24.6 26.8 38.4 NNM-r2r (oracle, Entity Metrics) 5.5 31.1 6.8 27.1 17.6 58.6 79.6 26.4 43.7 52.1 NNM-t2t 7.9 31.3 7.8 27.4 18.6 56.5 59.2 28.2 29.4 43.3 Neural model 2.6 25.9 4.2 23.8 14.1 54.7 38.5 22.8 15.1 32.8 Hybrid model 2.4 25.3 3.9 23.1 13.7 61.2 51.4 29.8 23.6 41.5 Hybrid model (oracle) 3.0 26.4 5.0 23.3 14.4 70.0 57.8 36.9 29.1 48.5 TVMegaSite NNM-r2t (oracle, BM25) 6.7 45.0 10.2 43.0 26.2 82.5 80.4 57.7 18.1 59.7 NNM-r2t (oracle, RG) 8.5 44.1 11.7 42.4 26.7 85.2 76.8 61.2 16.9 60.0 NNM-r2r (oracle, RG) 7.9 49.0 11.6 46.9 28.9 59.2 59.0 29.5 29.9 44.4 NNM-r2r (oracle, Entity Metrics) 4.9 42.8 8.8 40.4 24.2 60.8 81.7 26.0 37.5 51.5 NNM-t2t 6.2 43.2 8.6 41.4 24.9 63.2 69.3 31.8 35.3 49.9 Neural model 7.9 42.9 11.9 41.6 26.1 86.1 48.7 48.9 22.3 51.5 Hybrid model 5.5 38.8 10.2 36.9 22.8 84.5 57.2 51.0 29.3 55.5 Hybrid model (oracle) 8.9 42.1 11.9 40.9 25.9 84.0 69.5 56.4 36.8 61.7 Table 5: Results on the SUMMSCREEN test sets. BLEU, R1, R2, and RL are BLEU and ROUGE scores between model generated and reference recaps. Bo{C,R}-{p,r} are precision and recall for bag of characters and bag of character relations, respectively. The highest numbers for each dataset in each column are in bold. Generic Entity ForeverDreaming FD Only 16.5 47.3 TMS + FD 16.9 50.1 TVMegaSite TMS Only 25.9 61.7 TMS + FD 23.2 58.0 Table 6: Results of the oracle hybrid model comparing training on both datasets (TMS + FD) to training on the in-domain dataset only. The metrics are averaged scores of the generic and entity metrics. Training on both datasets helps for FD but hurts for TMS. Predicates Character Relation NNM-t2t 1.6 0.8 2.1 1.1 Hybrid model 2.3 0.9 2.0 1.0 Hybrid model (oracle) 2.4 1.0 2.4 1.0 Table 7: Human evaluation results. We report the average scores and their corresponding standard deviations for questions on predicate match and character relation similarity. the predicates in the generation match the predicates in the reference? 10 The second is When multiple characters are mentioned as being related in some way in the generated recap, are those same characters mentioned as being related in some way in the reference?", "We disregard the subjects in the first question because the second question involves evaluating characters and we want the two questions to focus on different aspects to maximize the 10 By predicate here we mean the part of a sentence or clause containing a verb and stating something about the subject (e.g., went home in John went home).", "efficiency of human annotations.", "Ratings are on a 1-5 scale with 5 indicating a perfect match.", "We randomly picked instances from the FD test set.", "We (the authors) annotated 120 instances in total for each question.", "After dropping 2 invalid annotations for the second question (as there may not be multiple characters mentioned), we summarize results in Table", "7. 
"While trends for the model performance on character relations are generally similar to our observations in Table 5, the results for predicate match are very different for NNM-t2t.", "This is likely because the first question is about predicates disregarding the correctness of the participants.", "We also want to highlight that compared to the oracle hybrid model, the non-oracle one shows competitive performance on predicate match but is less close in terms of generating correct character relations, showing future opportunities for improving this model.", "In Table 8, we show generation samples for the following models: the neural model, the hybrid model, and the oracle hybrid model.", "Table 8 (reference recap excerpt): The remains of two witches, one of which is from the Salem witch trials from the 1600s and the other a modern day Wiccan, are discovered in the remains of a burnt out cabin.", "The neural model manages to fit most of the character names from the reference into the generation.", "The generation shares similar topics with the reference, but compared to the hybrid models it lacks specifics.", "This matches our observations from the automatic metrics, where the neural model performs better on the generic metrics but worse on the entity metrics on the non-anonymized datasets.", "We hypothesize that this is caused by the difficulty of modeling long-form text.", "In the output of the non-oracle hybrid model, many facts that are not mentioned in the reference are actually from the transcript.", "For example, 40-year-old woman and de-fleshed prior to her death are in the transcript.", "Despite containing many specifics, the generation misses a few important details, such as failing to mention the main characters involved (i.e., Brennan and Booth).", "It also has incorrect facts.", "For example, according to the transcript, there are rocks at the scene, but the model describes the setting as a rock quarry.", "Compared to the other three models, the generation from the oracle hybrid model is the most faithful, although there are still incorrect facts (e.g., ... and they are having a hard time adjusting to their new life in the city.).", "The differences between the oracle and non-oracle hybrid model suggest that future research can focus on improving models' capabilities of doing content selection.", "As both oracle and non-oracle hybrid models suffer from generating incorrect facts, faithfulness in generation is also an important future research direction.", "We construct SUMMSCREEN, which contains pairs of TV show transcripts and recaps.", "We qualitatively analyze the challenging aspects of our dataset.", "We propose two entity-centric metrics to evaluate generated summaries, with one focusing on character overlap and the other focusing on overlap of character pairs.", "Empirically, we benchmark several neural models and nearest neighbor models for characterizing our datasets, finding that an oracle extractive summarizer gives the strongest performance according to the automatic metrics.", "Human evaluations show that the non-oracle hybrid model is competitive at generating faithful topics.", "Qualitative analysis shows that the hybrid model can benefit from better content selectors and both oracle and non-oracle hybrid models suffer from generating inaccurate details, highlighting several directions for future research.", "We wish to thank The TV MegaSite, Inc.
and Forever Dreaming for allowing us to use and redistribute their data for research purposes.", "This work was supported in part by a Google Fellowship to M. Chen." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "method", "abstain", "method", "abstain", "method", "objective", "abstain", "method", "result", "other", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "result", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "objective", "result", "abstain", "abstain", "other", "other" ]
[ "Solving crossword puzzles requires diverse reasoning capabilities, access to a vast amount of knowledge about language and the world, and the ability to satisfy the constraints imposed by the structure of the puzzle.", "In this work, we introduce solving crossword puzzles as a new natural language understanding task.", "We release a corpus of crossword puzzles collected from the New York Times daily crossword spanning 25 years and comprised of a total of around nine thousand puzzles.", "These puzzles include a diverse set of clues: historic, factual, word meaning, syn-onyms/antonyms, fill-in-the-blank, abbreviations, prefixes/suffixes, wordplay, and cross-lingual, as well as clues that depend on the answers to other clues.", "We separately release the clue-answer pairs from these puzzles as an open-domain question answering dataset containing over half a million unique clue-answer pairs.", "For the question answering task, our baselines include several sequence-to-sequence and retrieval-based generative models.", "We also introduce a non-parametric constraint satisfaction baseline for solving the entire crossword puzzle.", "Finally, we propose an evaluation framework which consists of several complementary performance metrics.", "Recent breakthroughs in NLP established high standards for the performance of machine learning methods across a variety of tasks.", "However, even state-of-the-art models demonstrate fragility (Wal-lace et al., 2019) and exhibit sensitivity to shallow data patterns (McCoy et al., 2019; Zellers et al., 2019; Jin et al., 2020; Si et al., 2019; Sugawara et al., 2020; Yogatama et al., 2019; Niven and Kao, 2019).", "This has led to a growing demand for successively more challenging tasks.", "different aspects of this task (Yang et al., 2018; Rajpurkar et al., 2016; Kwiatkowski et al., 2019a; Zellers et al., 2019; Dua et al., 2019; Rogers et al., 2021).", "There are two main forms of question answering (QA): extractive QA and open-domain QA.", "In extractive QA, a passage that answers the question is provided as input to the system along with the question.", "In open-domain QA, only the question is provided as input, and the answer must be generated either through memorized knowledge or via some form of explicit information retrieval over a large text collection which may contain answers.", "The task of answering clues in a crossword is a form of open-domain question answering.", "Once a human or an open-domain QA system generates a few possible answer candidates for each clue, one of these candidates may form the correct answer to a word slot in the crossword grid, if the candidate meets the constraints of the crossword grid.", "Solving a crossword puzzle is therefore a challenging task which requires (1) finding answers to a variety of clues that require extensive language and world knowledge, and (2) the ability to produce answer strings that meet the constraints of the crossword grid, including length of word slots and character overlap with other answers in the puzzle.", "Our contributions in this work are as follows: We introduce a new natural language understanding task of solving crossword puzzles, along with a dataset of New York Times crosswords from Dec. 1, 1993 to Dec. 
31, 2018.", "We propose an evaluation framework which consists of several complementary performance metrics.", "We release the collection of clue-answer pairs as a new open-domain QA dataset.", "We provide baselines for the proposed crossword task and the new QA task, including several sequence-to-sequence and retrieval-augmented generative Transformer models, with a constraint satisfaction crossword solver.", "Our work is in line with open-domain QA benchmarks.", "Examples of such tasks include datasets where each question can be answered using information contained in a relevant Wikipedia article (Yang et al., 2015; Kwiatkowski et al., 2019a; Yang et al., 2018).", "Several QA tasks have been designed to require multi-hop reasoning over structured knowledge bases (Berant et al., 2013; Bordes et al., 2015).", "The main limitation of such datasets is that their question types are mostly factual.", "Crossword clues differ from these efforts in that they combine a variety of different reasoning types.", "Another line of research that is relevant to our work explores the problem of solving Sudoku puzzles since it is also a constraint satisfaction problem.", "Most sudoku puzzles can be efficiently solved by algorithms that take advantage of the fixed input size and do not rely on machine learning methods (Si-monis, 2005).", "The machine learning attempts for solving Sudoku puzzles have been inspired by convolutional (Mehta, 2021) and recurrent relational networks (Palm et al., 2017).", "Unlike Sudoku, however, where the grids have the same structure, shape and constraints, crossword puzzles have arbitrary shape and internal structure and rely on answers to natural language questions that require reasoning over different kinds of world knowledge.", "Several previous studies have treated crossword puzzle solving as a constraint satisfaction problem (CSP) (Littman et al., 2002; Ernandes et al., 2005; Ginsberg, 2011).", "Littman et al. (2002)'s Proverb system incorporates a variety of information retrieval modules to generate candidate answers.", "The Database module searches a large database of historical clue-answer pairs to retrieve the answer candidates.", "They find very poor crossword-solving performance in ablation experiments where they limit their answer candidate generator modules to not use historical clue-answer databases.", "WebCrow (Ernan-des et al., 2005) builds upon Proverb and makes improvements to the database retriever module augmented with a new web module which searches the web for snippets that may contain answers.", "It allows partial matching to retrieve clues-answer pairs in the historical database that do not perfectly overlap with the query clue.", "Dr. Fill system proposed by Ginsberg (2011) treats each crossword puzzle as a singly-weighted CSP.", "Similarly to prior work, Dr. Fill relies on a large set of historical clue-answer pairs (up to 5M) collected over multiple years from the past puzzles by applying direct lookup and a variety of heuristics.", "One common design aspect of all these solvers is to generate answer candidates independently from the crossword structure and later use a separate puzzle solver to fill in the actual grid.", "In our work, we partition the task of 2649 crossword solving similarly.", "Barlacchi et al. (2014) and Severyn et al. (2015) observe that the most important source of candidate answers for a given clue is a large database of historical clue-answer pairs and introduce methods to better search these databases.", "Barlacchi et al. 
"Barlacchi et al. (2014) apply a BM25 retrieval model to generate clue lists similar to the query clue from a historical clue-answer database, where the generated clues are further refined through the application of re-ranking models.", "Severyn et al. (2015) introduce a distributional neural network to compute similarities between clues, trained over a large scale dataset of clues that they introduce.", "In contrast to the previous work, our goal in this work is to motivate solver systems to generate answers organically, just like a human might, rather than obtain answers via the lookup in historical clue-answer databases.", "The answers could be generated either from memory of having read something relevant, using world knowledge and language understanding, or by searching encyclopedic sources such as Wikipedia or a dictionary with relevant queries.", "For the purposes of our task, crosswords are defined as word puzzles with a given rectangular grid of white- and black-shaded squares.", "The goal is to fill the white squares with letters, forming words or phrases by solving textual clues which lead to the answers.", "The answer words and phrases are placed in the grid from left to right (\"Across\") and from top to bottom (\"Down\").", "The shaded squares are used to separate the words or phrases.", "Usually, the white spaces and punctuation are removed from the answer phrases.", "A sample crossword puzzle is given in Figure 1.", "Note that the answers can include named entities and abbreviations, and at times require the exact grammatical form, such as the correct verb tense or the plural noun.", "Solving a crossword puzzle is a complex task that requires generating the right answer candidates and selecting those that satisfy the puzzle constraints.", "Similar to prior work, we divide the task of solving a crossword puzzle into two subtasks, to be evaluated separately.", "The first subtask can be viewed as a question answering task, where a system is trained to generate a set of candidate answers for a given clue without taking into account any interdependencies between answers.", "The second subtask involves solving the entire crossword puzzle, i.e., filling out the crossword grid with a subset of candidate answers generated in the previous step.", "The two tasks could be solved separately or in an end-to-end fashion.", "In contrast to prior work (Ernandes et al., 2005; Ginsberg, 2011), our clue-answer data is linked directly with our puzzle-solving data, so no data leakage is possible between the QA training data and the crossword-solving test data.", "In the present work, we propose a separate solver for each task.", "We provide details on the challenges of implementing an end-to-end solver in the discussion section.", "Our dataset is sourced from the New York Times, which has been featuring a daily crossword puzzle since 1942.", "We worked with daily puzzles in the date range from December 1, 1993 through December 31, 2018 inclusive.", "All the crossword puzzles in our corpus are available to play through the New York Times games website (https://www.nytimes.com/crosswords).", "We release two separate specifications of the dataset corresponding to the subtasks described above: the NYT Crossword Puzzle dataset and the NYT Clue-Answer dataset (details for dataset access will be made available at https://github.com/text-machine-lab/xword_benchmark; we are currently finalizing the agreement with the New York Times to release this dataset).", "There are a few details that are specific to the NYT daily crossword.", "First, the clue and the answer must agree in tense, part of speech, and even language, so that the clue and answer could easily be substituted for each other in a sentence.", "Second, abbreviated clues indicate abbreviated answers.",
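To make the grid structure described above concrete, here is a minimal sketch of extracting Across and Down word slots from a grid; the '#'-based encoding, the helper name, and the minimum slot length of 2 are illustrative assumptions:

```python
def extract_slots(grid):
    # grid: list of equal-length strings, '#' for black squares,
    # any other character for a fillable white cell.
    rows, cols = len(grid), len(grid[0])
    lines = [[(r, c) for c in range(cols)] for r in range(rows)]   # Across
    lines += [[(r, c) for r in range(rows)] for c in range(cols)]  # Down
    slots = []
    for line in lines:
        run = []
        for cell in line + [None]:          # None sentinel flushes the last run
            if cell is not None and grid[cell[0]][cell[1]] != "#":
                run.append(cell)
            else:
                if len(run) >= 2:           # single cells are not word slots
                    slots.append(run)
                run = []
    return slots

# Example: a 3x3 grid with two black squares yields four slots.
print(extract_slots(["..#", "...", "#.."]))
```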
answers.", "Further, clues that end in a question mark indicate a play on words in the clue or the answer.", "There are also a lot of short words that appear in crosswords much more often than in real life.", "These 3and 4-letter words, referred to as crosswordese, can be very helpful in solving the puzzles.", "Finally, every Sunday through Thursday NYT crossword puzzle has a theme, something that unites the puzzle's longest answers.", "Theme answers are always found in symmetrical places in the grid.", "Crossword Puzzle Dataset.", "The dataset consists of 9152 puzzles, split into the training, validation, and test subsets in the 80/10/10 ratio which give us 7293/922/941 puzzles in each set.", "We removed the total of 50/61 special puzzles from the validation 1 https://www.nytimes.com/crosswords 2 Details for dataset access will be made available at https://github.com/text-machine-lab/ xword_benchmark .", "We are currently finalizing the agreement with the New York Times to release this dataset.", "and test splits, respectively, because they used nonstandard rules for filling in the answers, such as L-shaped word slots or allowing cells to be filled with multiple characters (called rebus entries).", "Most NYT crossword grids have a square shape of 15 15 cells, with the exception of Sunday-released crosswords being 21 21 cells.", "Other shapes combined account for less than 3% of the data.", "The vast majority of both clues and answers are short, with over 76% of clues consisting of a single word.", "For traditional sequence-to-sequence modeling such conciseness imposes an additional challenge, as there is very little context provided to the model.", "In most puzzles, over 80% of the grid cells are filled and every character is an intersection of two answers.", "Such high answer interdependency suggests a high cost of answer mispre-diction, as errors affect a larger number of intersecting words.", "More detailed statistics on the dataset are given in Table 1.", "Clue-Answer Dataset.", "We generate an open-domain question answering dataset consisting solely of clue-answer pairs from the respective splits of the Crossword Puzzle dataset described above (including the special puzzles).", "Within each of the splits, we only keep unique clue-answer pairs and remove all duplicates.", "However, certain clues may still be shared between the puzzles contained in different splits.", "We therefore remove from the training data the clue-answer pairs which are found in the test or validation data.", "This ensures that the model can not trivially recall the answers to the overlapping clues while predicting for the test and validation splits.", "This produces the total of 578 k clue-answer pairs, with 433 k/ 72 k/ 72 k examples in the train/validation/test splits, respectively.", "Since certain answers consist of phrases and multiple words that are merged into a single string (such as \"VERY-FAST\"), we further postprocess the answers by splitting the strings into individual words using a dictionary.", "Out of all the possible word splits of a given string we pick the one that has the smallest number of words.", "If there are multiple solutions, we select the split with the highest average word frequency.", "Examples of a variety of clues found in this dataset are given in the following section.", "To provide more insight into the diversity of the clue types and the complexity of the task, we categorize", "Factual.", "Clues that encode encyclopedic knowledge and typically can be answered using resources 
such as Wikipedia (e.g. Clue: South Carolina State tree, Answer: PALMETTO ).", "This type of clue is the closest to the questions found in open-domain QA datasets.", "Note that the facts required to solve some of the clues implicitly depend on the date when a given crossword was released.", "For instance, the clue \" President of Brazil \" has a time-dependent answer.", "Historical.", "Clues that require the knowledge of historical facts and temporal relations between events.", "(e.g. Clue: Automobile pioneer, Answer: BENZ ).", "Word meaning.", "Clues that exploit general vocabulary knowledge and can typically be resolved using a dictionary.", "(e.g. Clue: Opposing sides, Answer: FOES ).", "Synonyms/Antonyms.", "Clues that focus on paraphrasing and synonymy relations (e.g. Clue: Prognosticators, Answer: SEERS ).", "In most cases, such clues can be solved with a thesaurus.", "Fill in the blank.", "Clues formulated as a cloze task (e.g. Clue: Magna Cum __, Answer: LAUDE ).", "Fill-in-the-blank clues are expected to be easy to solve for the models trained with the masked language modeling objective (Devlin et al., 2019).", "Abbreviations.", "Clues answered with acronyms (e.g. Clue: (Abbr.) Old Communist state, Answer: USSR ).", "Abbreviation clues are marked with \" Abbr. \" label.", "Wordplay.", "Clues that rely on wordplay, anagrams, or puns / pronunciation similarities (e.g. Clue: Consider an imaginary animal, Answer: BEAR IN MIND ).", "In a lot of cases, wordplay clues involve jokes and exploit different possible meanings and contexts for the same word.", "Cross-lingual.", "Clues that either explicitly use words from other languages, or imply a specific language-dependent form of the answer.", "(e.g. Clue: Sunrise direccin, Answer: ESTE ).", "Clues dependent on other clues.", "Clues the answer to which can be provided only after a different clue has been solved (e.g. 
Clue: Last words of 45 Across ).", "Although rare, this category of clues suggests that the entire puzzle has to be solved in certain order.", "To understand the distribution of these classes, we randomly selected 1000 examples from the test split of the data and manually annotated them.", "Figure 2 illustrates the class distribution of the annotated examples, showing that the Factual class covers a little over a third of all examples.", "The synonyms/antonyms, word meaning and wordplay classes taken together comprise 50% of the data.", "The remaining 20% are taken by fill-in-the-blank and historical clues, as well as the low-frequency classes (comprising less than or around 1%), which include abbreviation, dependent, prefix/suffix and cross-lingual clues.", "We illustrate each one of these classes in the Figure 1.", "metrics we introduce for the two subtasks.", "Clue-Answer Task.", "For the clue-answer task, we use the following metrics: Exact Match (EM) .", "Model output matches the ground-truth answer exactly.", "Contains (In) .", "Model output contains the ground-truth answer as a contiguous substring Since the ground-truth answers do not contain diacritics, accents, punctuation and whitespace characters, we also consider normalized versions of the Train Validation Test Clue-Answer dataset # clues 4,33,033 72,303 72,939 avg/median clue length (words) 4.0/3 4.2/4 4.2/4 avg/median ans.", "above metrics, in which these are stripped from the model output prior to computing the metric.", "We will refer to them as EM norm and In norm , We report these metrics for topk predictions, where k varies from 1 to 20.", "Crossword Puzzle Task.", "To evaluate the performance of the crossword puzzle solver, we propose to compute the following two metrics: Character Accuracy (Acc char ) .", "Percentage of characters in the predicted crossword solution that match the ground-truth solution.", "Word Accuracy (Acc word ) .", "Percentage of words in the predicted crossword solution that match the ground-truth solution.", "clues, it may only be possible to produce a partial solution to a puzzle.", "The crossword puzzle solver will fail to produce a solution when the answer candidate list for a clue does not contain the correct answer.", "To prevent this from happening, the character cells which belong to that clue's answer must be removed from the puzzle grid, unless the characters are shared by other clues.", "We propose two additional metrics to track what percentage of the puzzle needs to be redacted to produce a partial solution: Word Removal (Rem word ) .", "% of words that need to be removed from the puzzle to produce a partial solution.", "Character Removal (Rem word ) .", "% of characters that need to be removed from the puzzle grid to produce a partial solution.", "The motivation for introducing the removal metrics is to indicate the amount of constraint relaxation.", "For instance, a completely relaxed puzzle grid, where many character cells have been removed, such that the grid has no word intersection constraints left, could be considered \"solved\" by selecting any candidates from the answer candidate lists at random.", "However, this solution will mostly be incorrect when compared to the gold puzzle solution.", "As the word and character removal percentage increases, the potential for correctly solving the remaining puzzle is expected to decrease, since the under-constrained answer cells in the grid can be incorrectly filled by other candidates (which may not be the right answers).", "The 
, "Our baseline approach is a two-step solution that treats each subtask separately.", "We first develop a set of baseline systems that solve the question answering problem, ignoring the grid-imposed answer interdependencies.", "We use sequence-to-sequence and retrieval-augmented Transformer baselines for this subtask.", "We feed the generated answer candidates to a crossword solver in order to complete the puzzle and evaluate the produced puzzle solutions.", "Sequence-to-sequence baselines.", "We fine-tune two sequence-to-sequence models on the clue-answer training data.", "We select two widely known models, BART (Lewis et al., 2019) and T5 (Raffel et al., 2019), which achieved state-of-the-art results on a set of generative tasks, including specifically abstractive QA involving commonsense and multihop reasoning (Fan et al., 2019; Khashabi et al., 2018; Zhang et al., 2018); we use BART-large, with approximately 406M parameters, and T5-base, with approximately 220M parameters.", "We train both models for 8 epochs with a learning rate of 5 × 10^-5 and a batch size of 60.", "Retrieval-augmented generation.", "T5 and BART store world knowledge implicitly in their parameters and are known to hallucinate facts (Maynez et al., 2020).", "Recently, a new method called retrieval-augmented generation (RAG) (Lewis et al., 2020) has been introduced for open-domain question answering.", "This method involves a Transformer encoder to encode the question and a decoder to generate the answer (Vaswani et al., 2017), but the encoded query is supplemented with relevant excerpts retrieved from an external textual corpus via Maximum Inner Product Search (MIPS); the entire neural network is trained end-to-end.", "Due to a built-in retrieval mechanism for performing a soft search over a large collection of external documents, such systems are capable of producing stronger results on knowledge-intensive open-domain question answering tasks than vanilla sequence-to-sequence generative models, and are more factually accurate (Shuster et al., 2021).", "Motivated by this, we train RAG models to extract knowledge from two separate external sources of knowledge:", "(a) RAG-wiki uses a full Wikipedia dump from December 2018.", "Following existing work (Lewis et al., 2020; Karpukhin et al., 2020; Lee et al., 2019), each Wikipedia article is split into disjoint 100-word chunks, resulting in a total of 21M passages.", "(b) RAG-dict uses several English dictionaries and thesauri sources, including Wiktionary, Merriam-Webster, and Google's English dictionary by Oxford Languages.", "For both of these models, we use the retriever embeddings pretrained on the Natural Questions corpus (Kwiatkowski et al., 2019b) in order to prime the MIPS retrieval to return meaningful entries (Lewis et al., 2020)."
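As an illustration of the sequence-to-sequence setup, the sketch below fine-tunes BART on (clue, answer) pairs with HuggingFace Transformers; the example clues, the optimizer choice and the label-padding handling are ours and only indicative of a training step, not the authors' exact pipeline:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

clues = ["Automobile pioneer", "Magna Cum __"]   # illustrative examples
answers = ["BENZ", "LAUDE"]

batch = tokenizer(clues, return_tensors="pt", padding=True)
labels = tokenizer(answers, return_tensors="pt", padding=True).input_ids
labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss

outputs = model(**batch, labels=labels)          # teacher-forced cross-entropy
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```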
, "We train with a batch size of 8, label smoothing set to 0.1, dropout probability of 0.1, weight decay rate of 0.001, and a learning rate of 3 × 10^-5 for 8 epochs.", "A crossword puzzle can be cast as an instance of a satisfiability problem, and its solution represents a particular character assignment so that all the constraints of the puzzle are met.", "Under this formulation, three main conditions have to be satisfied: (1) the answer candidates for every clue must come from a set of words that answer the question, (2) they must have the exact length specified by the corresponding grid entry, and (3) for every pair of words that intersect in the puzzle grid, acceptable word assignments must have the same character at the intersection offset.", "This class of problems can be modelled through Satisfiability Modulo Theories (SMT).", "SMT is a generalization of the Boolean Satisfiability problem (SAT) in which some of the binary variables are replaced by first-order logic predicates over a set of non-binary variables.", "In the case of crosswords, a variable represents one character in the crossword grid, which can be assigned a single letter of the English alphabet or a digit from 0 through 9.", "This is further subject to the constraints mentioned above, which can be formulated with the equality operator and the Boolean logical operators AND and OR .", "For example, a word slot of length 3 where the candidate answers are \"ESC\", \"DEL\" or \"CMD\" can be formalised as: { v1 = E AND v2 = S AND v3 = C } OR { v1 = D AND v2 = E AND v3 = L } OR { v1 = C AND v2 = M AND v3 = D } (a runnable sketch of this encoding is given below).", "To solve the entire crossword puzzle, we use the formulation that treats it as an SMT problem.", "We modify an open-source implementation of this formulation (https://github.com/pncnmnp/Crossword-Solver) based on the Z3 SMT solver (de Moura and Bjørner, 2008).", "The answer length and intersection constraints are imposed on the variable assignment, as specified by the input crossword grid.", "We take the top-k predictions from our baseline models and, for each prediction, select all possible substrings of the required length as answer candidates.", "For simplicity, we exclude from our consideration all the crosswords with a single cell containing more than one English letter in it.", "Our current baseline constraint satisfaction solver is limited in that it simply returns \"not-satisfied\" ( nosat ) for a puzzle where no valid solution exists, that is, when the hard constraints of the puzzle cannot all be met by the inputs.", "Since the candidate lists for certain clues might not meet all the constraints, this results in a nosat solution for almost all crossword puzzles, and we are not able to extract partial solutions.", "To bypass this issue and produce partial solutions, we pre-filter each clue with an oracle that only allows those clues into the SMT solver for which the actual answer is available as one of the candidates.", "In Table 2 we report the Top-1, Top-10 and Top-20 match accuracies for the four evaluation metrics defined in Section 3.3.", "Our results (Table 2) suggest the high difficulty of the clue-answer dataset, with the best achieved accuracy metric staying under 30% for the top-1 model prediction.", "Even top-20 predictions have an almost 40% chance of not containing the ground-truth answer anywhere within the generated strings."
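The Z3 sketch referenced above encodes a single length-3 slot plus one intersection constraint; the variable names and candidate lists are illustrative:

```python
from z3 import And, Int, Or, Solver, sat

def word_constraint(cells, candidates):
    """A slot is satisfied when its cells spell one of the candidate words."""
    return Or([And([cell == ord(ch) for cell, ch in zip(cells, word)])
               for word in candidates])

v1, v2, v3 = Int("v1"), Int("v2"), Int("v3")
solver = Solver()
solver.add(word_constraint([v1, v2, v3], ["ESC", "DEL", "CMD"]))
solver.add(v1 == ord("D"))  # a crossing answer fixes the first cell to 'D'

if solver.check() == sat:
    model = solver.model()
    print("".join(chr(model[v].as_long()) for v in (v1, v2, v3)))  # DEL
else:
    print("nosat")  # mirrors the solver behavior described above
```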
, "Generative Transformer models such as T5-base and BART-large perform poorly on the clue-answer task; however, the model accuracy across most metrics almost doubles when switching from T5-base (with 220M parameters) to BART-large (with 400M parameters).", "Table 3: Performance of baseline systems on the Crossword Puzzle dataset, reported as Solving Accuracy (Acc word / Acc char ) and Puzzle Removed (Rem word / Rem char ): BART 16.6 / 28.4 / 55.6 / 43.4; RAG-wiki 23.8 / 37.8 / 40.3 / 26.3; RAG-dict 22.1 / 35.9 / 40.8 / 26.8.", "Our strongest baselines, RAG-wiki and RAG-dict, achieve 50.6 and 50.0 exact-match accuracies on the clue-answer dataset, respectively.", "The In norm score, which looks at whether any substring of the generated answer matches the ground truth and which can be seen as an upper bound on the model's ability to solve the puzzle, is slightly higher, at 56.7 for RAG-wiki and 56.2 for RAG-dict.", "Not surprisingly, these results show that the additional step of retrieving Wikipedia or dictionary entries increases the accuracy considerably compared to fine-tuned sequence-to-sequence models such as BART, which store this information in their parameters.", "The normalized metrics, which remove diacritics, punctuation and whitespace, bring the accuracy up by 2-6%, depending on the model.", "We examined the top-20 exact-match predictions generated by RAG-wiki and RAG-dict and find that both models are in agreement in terms of answer matches for around 85% of the test set.", "In other words, both models either correctly predict the ground truth answer or both fail to do so.", "The baseline performance on the entire crossword puzzle dataset shows there is significant room for improvement of the existing architectures (see Table 3).", "Our best model, RAG-wiki, correctly fills in the answers for only 26% (on average) of the total number of puzzle clues, despite having a much higher performance on the clue-answer task, i.e. measured independently from the crossword grid (Table 2).", "This is explained by the fact that the clues with no ground-truth answer present among the candidates have to be removed from the puzzles in order for the solver to converge, which in turn relaxes the interdependency constraints too much, so that a filled answer may be selected from the set of candidates almost at random.", "Despite that, the baseline solver is able to solve over a quarter of each puzzle on average.", "Evaluation on the annotated subset of the data reveals that some clue types present significantly higher levels of difficulty than others (see Table 4).", "In particular, all of our baseline systems struggle with the clues requiring reasoning in the context of historical knowledge.", "As expected, all of the models demonstrate much stronger performance on the factual and word-meaning clue types, since the relevant answer candidates are likely to be found in the Wikipedia data used for pre-training.", "We observe the biggest differences between BART and RAG performance for the abbreviation and prefix-suffix categories.", "The document retrieval step in RAG allows for more efficient matching of supporting documents, leading to the generation of more relevant answer candidates.", "For instance, the clue Warehouse abbr. results in pkg and bldg candidates among RAG predictions, whereas BART generates abstract and largely irrelevant strings."
, "Our manual inspection of model predictions suggests that both BART and RAG correctly infer the grammatical form of the answer from the formulation of the clue.", "For example, the clue Stitched produces the candidate answers Sewn and Made , and the clue Word repeated after Que triggers mostly Spanish and French generations (e.g. Avec or Sera ).", "As previously stated, RAG-wiki and RAG-dict largely agree with each other with respect to the ground truth answers.", "We qualitatively assessed instances where either RAG-wiki or RAG-dict predict the answer correctly in Appendix A.", "7 Discussion and Future Work.", "The presented task is challenging to approach in an end-to-end model fashion.", "There are several reasons for this, which we discuss below.", "Character-level outputs.", "Commonly used Transformer decoders do not produce character-level outputs, generating BPE tokens and wordpieces instead, which creates a problem for a potential end-to-end neural crossword solver.", "One possible solution is a modification of the loss term, designed with character-based output logits instead of BPE, since the crossword grid constraints are at a single-cell (i.e., character) level.", "There is some work on character-level Transformer encoders, such as Ma et al. (2020).", "However, to the best of our knowledge there is no major generative Transformer architecture which supports character-level outputs yet; we intend to explore this avenue further in future work to develop an end-to-end neural crossword solver.", "SMT solver constraints.", "As mentioned earlier, our current baseline solver does not allow partial solutions, and we rely on pre-filtering using the oracle from the ground-truth answers.", "Although this strategy is flawed due to its use of the oracle, the alternatives are currently either computationally intractable or too lossy.", "One such strategy is to remove k clues at a time, starting with k = 1 and progressively increasing the number of clues removed until the remaining relaxed puzzle can be solved; in the worst case this enumerates all subsets of clues and thus has complexity O(2^n), where n is the total number of clues in the puzzle (a sketch of this procedure is given below).", "Another approach we tried was to relax certain constraints of the puzzle grid, maximally satisfying as many constraints as possible, which is formally known as the maximum satisfiability problem (MAX-SAT).", "This is an NP-hard problem for which it is hard to find approximate solutions (Papadimitriou, 1994).", "Our initial foray into such approximate solvers (Previti and Marques-Silva, 2013; Liffiton and Malik, 2013) produced severely under-constrained puzzles with garbage character entries.", "Further work needs to be done to extend this solver to handle partial solutions elegantly without the need for an oracle; this could be addressed with probabilistic and weighted constraint satisfaction solvers, in line with the work by Littman et al. (2002), Keim et al. (1999) and Ginsberg (2011), but without the dependency on past crossword clues."
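The progressive relaxation strategy mentioned above can be sketched as follows; `solve_smt` stands in for a call into the Z3 formulation and is assumed to return a solution or None, which makes the worst-case cost O(2^n):

```python
from itertools import combinations

def solve_with_relaxation(clues, solve_smt):
    """Drop k clues at a time, growing k until the relaxed puzzle is satisfiable.
    Returns the partial solution and the indices of the removed clues."""
    n = len(clues)
    for k in range(n + 1):
        for removed in combinations(range(n), k):
            removed_set = set(removed)
            kept = [clue for i, clue in enumerate(clues) if i not in removed_set]
            solution = solve_smt(kept)
            if solution is not None:
                return solution, removed
    return None, ()
```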
, "We present a new challenging task of solving crossword puzzles and introduce the New York Times Crosswords Dataset, which can be approached at a QA-like level of individual clue-answer pairs, or at the level of an entire puzzle, with imposed answer interdependency constraints.", "This new benchmark contains a broad range of clue types that require diverse reasoning components.", "We carry out a set of baseline experiments that indicate the overall difficulty of this task for current systems, including retrieval-augmented SOTA models for open-domain question answering.", "We also discuss the technical challenges in building a crossword solver and obtaining partial solutions, as well as in the design of end-to-end systems for this task.", "We hope that the NYT Crosswords task will set a new high bar for AI systems.", "The New York Times daily crossword puzzles are a copyright of the New York Times.", "We have obtained preliminary approval from the New York Times to release this data under a non-commercial and research use license, and are in the process of finalizing the exact licensing terms and distribution channels with the NYT legal department.", "We would like to thank the anonymous reviewers for their careful and insightful review of our manuscript and their feedback.", "We would like to thank Parth Parikh for the permission to modify and reuse parts of their crossword solver.", "We are grateful to the New York Times staff for their support of this project.", "This project is funded in part by an NSF CAREER award to Anna Rumshisky (IIS-1652742)." ]
[ "abstain", "objective", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "other", "other", "other", "other" ]
[ "AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs.", "Sequence-to-sequence models can be used to this end by converting the AMR graphs to strings.", "Approaching the problem while working directly with graphs requires the use of graph-to-sequence models that encode the AMR graph into a vector representation.", "Such encoding has been shown to be beneficial in the past, and unlike sequential encoding, it allows us to explicitly capture reentrant structures in the AMR graphs.", "We investigate the extent to which reentrancies (nodes with multiple parents) have an impact on AMR-to-text generation by comparing graph encoders to tree encoders, where reentrancies are not preserved.", "We show that improvements in the treatment of reentrancies and long-range dependencies contribute to higher overall scores for graph encoders.", "Our best model achieves 24.40 BLEU on LDC2015E86, outperforming the state of the art by 1.1 points and 24.54 BLEU on LDC2017T10, outperforming the state of the art by 1.24 points.", "Abstract Meaning Representation (AMR; Ba-narescu et al. 2013) is a semantic graph representation that abstracts away from the syntactic realization of a sentence, where nodes in the graph represent concepts and edges represent semantic relations between them.", "AMRs are graphs, rather than trees, because co-references and control structures result in nodes with multiple parents, called reentrancies.", "For instance, the AMR of Figure", "1(a) contains a reentrancy between finger and he , caused by the possessive pronoun his .", "AMR-to-text generation is the task of automatically generating natural language from AMR", "Attentive encoder/decoder architectures, commonly used for Neural Machine Translation (NMT), have been explored for this task (Kon-stas et al., 2017; Song et al., 2018; Beck et al., 2018).", "In order to use sequence-to-sequence models, Konstas et al. (2017) reduce the AMR graphs to sequences, while Song et al. (2018) and Beck et al. (2018) directly encode them as graphs.", "Graph encoding allows the model to explicitly encode reentrant structures present in the AMR graphs.", "While central to AMR, reentrancies are often hard to treat both in parsing and in generation.", "Previous work either removed them from the graphs, hence obtaining sequential (Konstas et al., 2017) or tree-structured (Liu et al., 2015; Takase et al., 2016) data, while other work maintained them but did not analyze their impact on performance (e.g., Song et al., 2018; Beck et al., 2018).", "Damonte et al. 
, "Damonte et al. (2017) showed that state-of-the-art parsers do not perform well in predicting reentrant structures, while van Noord and Bos (2017) compared different pre- and post-processing techniques to improve the performance of sequence-to-sequence parsers with respect to reentrancies.", "It is not yet clear whether explicit encoding of reentrancies is beneficial for generation.", "In this paper, we compare three types of encoders for AMR: 1) sequential encoders, which reduce AMR graphs to sequences; 2) tree encoders, which ignore reentrancies; and 3) graph encoders.", "We pay particular attention to two phenomena: reentrancies, which mark co-reference and control structures, and long-range dependencies in the AMR graphs, which are expected to benefit from structural encoding.", "The contributions of the paper are two-fold: We present structural encoders for the encoder/decoder framework and show the benefits of graph encoders not only compared to sequential encoders but also compared to tree encoders, which have not been studied so far for AMR-to-text generation.", "We show that better treatment of reentrancies and long-range dependencies contributes to improvements in the graph encoders.", "Our best model, based on a graph encoder, achieves state-of-the-art results for both the LDC2015E86 dataset (24.40 on BLEU and 23.79 on Meteor) and the LDC2017T10 dataset (24.54 on BLEU and 24.07 on Meteor).", "Graph-structured AMRs AMRs are normally represented as rooted and directed graphs:", "$G_0 = (V_0, E_0, L)$, $V_0 = \{v_1, v_2, \ldots, v_n\}$, $root \in V_0$.", "Each edge $e \in E_0$ is a triple $e = (i, label, j)$, where $i \in V_0$ is the parent node, $label \in L$ is the edge label and $j \in V_0$ is the child node.", "In order to obtain unlabeled edges, thus decreasing the total number of parameters required by the models, we replace each labeled edge $e = (i, label, j)$ with two unlabeled edges $e_1 = (i, label)$ and $e_2 = (label, j)$: $G = (V, E)$, $V = V_0 \cup L = \{v_1, \ldots, v_n, \ell_1, \ldots, \ell_{n'}\}$, $E \subseteq (V_0 \times L) \cup (L \times V_0)$.", "Each unlabeled edge $e \in E$ is a pair $e = (i, j)$, where one of the following holds: 1. $i \in V_0$ and $j \in L$; 2. $i \in L$ and $j \in V_0$.", "For instance, the edge between eat-01 and he with label :arg0 of Figure 1(a) is replaced by two edges in Figure 1(d): an edge between eat-01 and :arg0 and another one between :arg0 and he .", "The process, also used in Beck et al. (2018), transforms the input graph into its equivalent Levi graph (Levi, 1942).", "Tree-structured AMRs In order to obtain tree structures, it is necessary to discard the reentrancies from the AMR graphs.", "Similarly to Takase et al. (2016), we replace nodes with $n > 1$ incoming edges with $n$ identically labeled nodes, each with a single incoming edge.", "Sequential AMRs Following Konstas et al. (2017), the input sequence is a linearized and anonymized AMR graph.", "Linearization is used to convert the graph into a sequence $x = x_1, \ldots, x_N$, with $x_i \in V$."
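The labeled-to-unlabeled edge transformation above amounts to building the Levi graph; a minimal sketch (our own helper, with label nodes kept unique per edge, which is one possible design choice) follows:

```python
def to_levi_graph(labeled_edges):
    """Replace every labeled edge (i, label, j) with the unlabeled pair
    (i, label_node) and (label_node, j), as described above."""
    nodes, edges = set(), []
    for idx, (parent, label, child) in enumerate(labeled_edges):
        label_node = f"{label}#{idx}"  # one label node per labeled edge
        nodes.update([parent, label_node, child])
        edges.append((parent, label_node))
        edges.append((label_node, child))
    return nodes, edges

# e.g., the :arg0 edge of Figure 1: to_levi_graph([("eat-01", ":arg0", "he")])
# yields edges [("eat-01", ":arg0#0"), (":arg0#0", "he")].
```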
, "The depth-first traversal of the graph defines the indexing between nodes and tokens in the sequence.", "For instance, the root node is $x_1$, its leftmost child is $x_2$ and so on.", "Nodes with multiple parents are visited more than once.", "At each visit, their labels are repeated in the sequence, effectively losing reentrancy information, as shown in Figure 1(b).", "Anonymization replaces names and rare words with coarse categories to reduce data sparsity.", "An alternative to anonymization is to employ a copy mechanism (Gulcehre et al., 2016), where the models learn to copy rare words from the input itself.", "In this paper, we follow the anonymization approach.", "In this section, we review the encoders adopted as building blocks for our tree and graph encoders.", "We reimplement the encoder of Konstas et al. (2017), where the sequential linearization is the input to a bidirectional LSTM (BiLSTM; Graves et al. 2013) network.", "The hidden state of the BiLSTM at step $i$ is used as a context-aware word representation of the $i$-th token in the sequence: $e_{1:N} = \mathrm{BiLSTM}(x_{1:N})$, where $e_i \in \mathbb{R}^d$ and $d$ is the size of the output embeddings.", "Tree-Structured Long Short-Term Memory Networks (TreeLSTM; Tai et al. 2015) have been introduced primarily as a way to encode the hierarchical structure of syntactic trees (Tai et al., 2015), but they have also been applied to AMR for the task of headline generation (Takase et al., 2016).", "TreeLSTMs assume tree-structured input, so AMR graphs must be preprocessed to respect this constraint: reentrancies, which play an essential role in AMR, must be removed, thereby transforming the graphs into trees.", "We use the Child-Sum variant introduced by Tai et al. (2015), which processes the tree in a bottom-up pass.", "When visiting a node, the hidden states of its children are summed up in a single vector which is then passed into recurrent gates.", "In order to use information from both incoming and outgoing edges (parents and children), we employ bidirectional TreeLSTMs (Eriguchi et al., 2016), where the bottom-up pass is followed by a top-down pass.", "The top-down state of the root node is obtained by feeding the bottom-up state of the root node through a feed-forward layer: $h^{\downarrow}_{root} = \tanh(W_r h^{\uparrow}_{root} + b)$, where $h^{\uparrow}_i$ is the hidden state of node $x_i \in V$ for the bottom-up pass and $h^{\downarrow}_i$ is the hidden state of node $x_i$ for the top-down pass.", "The top-down states for all other nodes are computed with an LSTM, with the cell state given by their parent nodes: $h^{\downarrow}_i = \mathrm{LSTM}(h^{\downarrow}_{p(i)}, h^{\uparrow}_i)$, where $p(i)$ is the parent of node $x_i$ in the tree.", "The final hidden states are obtained by concatenating the states from the bottom-up pass and the top-down pass: $h_i = [h^{\uparrow}_i; h^{\downarrow}_i]$.", "The hidden state of the root node is usually used as a representation for the entire tree.", "In order to use attention over all nodes, as in traditional NMT (Bahdanau et al., 2015), we can however build node embeddings by extracting the hidden states of each node in the tree: $e_{1:N} = h_{1:N}$, where $e_i \in \mathbb{R}^d$ and $d$ is the size of the output embeddings.", "The encoder is related to the TreeLSTM encoder of Takase et al. (2016), which however encodes labeled trees and does not use a top-down pass."
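A compact sketch of the top-down half of the bidirectional TreeLSTM follows; the Child-Sum bottom-up states are assumed to be precomputed, and the tensor layout and module names are ours:

```python
import torch
import torch.nn as nn

class TopDownTreePass(nn.Module):
    """Computes top-down states from bottom-up ones, then concatenates both."""
    def __init__(self, dim):
        super().__init__()
        self.root_proj = nn.Linear(dim, dim)
        self.cell = nn.LSTMCell(dim, dim)

    def forward(self, up, parent, order):
        # up: (num_nodes, dim) bottom-up states; parent[i]: parent index of i;
        # order: node indices with parents before children, order[0] = root.
        down = [None] * up.size(0)
        cells = [None] * up.size(0)
        root = order[0]
        down[root] = torch.tanh(self.root_proj(up[root]))
        cells[root] = torch.zeros_like(up[root])
        for i in order[1:]:
            h, c = self.cell(up[i].unsqueeze(0),
                             (down[parent[i]].unsqueeze(0),
                              cells[parent[i]].unsqueeze(0)))
            down[i], cells[i] = h.squeeze(0), c.squeeze(0)
        return torch.cat([up, torch.stack(down)], dim=-1)  # [h_up ; h_down]
```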
, "Graph Convolutional Network (GCN; Duvenaud et al. 2015; Kipf and Welling 2016) is a neural network architecture that learns embeddings of nodes in a graph by looking at their nearby nodes.", "In Natural Language Processing, GCNs have been used for Semantic Role Labeling (Marcheggiani and Titov, 2017), NMT (Bastings et al., 2017), Named Entity Recognition (Cetoli et al., 2017) and text generation (Marcheggiani and Perez-Beltrachini, 2018).", "A graph-to-sequence neural network was first introduced by Xu et al. (2018).", "The authors review the similarities between their approach, GCN, and another approach based on GRUs (Li et al., 2015).", "The latter recently inspired a graph-to-sequence architecture for AMR-to-text generation (Beck et al., 2018).", "Simultaneously, Song et al. (2018) proposed a graph encoder based on LSTMs.", "The architectures of Song et al. (2018) and Beck et al. (2018) are both based on the same core computation of a GCN, which sums over the embeddings of the immediate neighborhood of each node: $h^{(k+1)}_i = \sigma\big(\sum_{j \in N(i)} W^{(k)}_{(j,i)} h^{(k)}_j + b^{(k)}\big)$, where $h^{(k)}_i$ is the embedding of node $x_i \in V$ at layer $k$, $\sigma$ is a non-linear activation function, $N(i)$ is the set of the immediate neighbors of $x_i$, and $W^{(k)}_{(j,i)} \in \mathbb{R}^{m \times m}$ and $b^{(k)} \in \mathbb{R}^m$, with $m$ being the size of the embeddings.", "It is possible to use recurrent networks to model the update of the node embeddings.", "Specifically, Beck et al. (2018) use a GRU layer where the gates are modeled as GCN layers.", "Song et al. (2018) did not use the activation function and performed an LSTM update instead.", "The systems of Song et al. (2018) and Beck et al. (2018) further differ in design and implementation decisions such as in the use of edge labels and edge directionality.", "Throughout the rest of the paper, we follow the traditional, non-recurrent implementation of GCN also adopted in other NLP tasks (Marcheggiani and Titov, 2017; Bastings et al., 2017; Cetoli et al., 2017).", "In our experiments, the node embeddings are computed as follows: $h^{(k+1)}_i = \sigma\big(\sum_{j \in N(i)} W^{(k)}_{\mathrm{dir}(j,i)} h^{(k)}_j + b^{(k)}\big)$ (1), where $\mathrm{dir}(j,i)$ indicates the direction of the edge between $x_j$ and $x_i$ (i.e., outgoing or incoming edge).", "The hidden vectors from the last layer of the GCN network are finally used to represent each node in the graph: $e_{1:N} = h^{(K)}_1, \ldots, h^{(K)}_N$, where $K$ is the number of GCN layers used, $e_i \in \mathbb{R}^d$, and $d$ is the size of the output embeddings."
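Equation (1) with direction-specific weights can be sketched as a single PyTorch layer; the edge-list format, the naming and the use of index_add are our choices:

```python
import torch
import torch.nn as nn

class DirectionalGCNLayer(nn.Module):
    """One GCN layer following Eq. (1): neighbor embeddings are transformed
    with a direction-specific weight matrix and summed per node."""
    def __init__(self, dim):
        super().__init__()
        self.w_in = nn.Linear(dim, dim, bias=False)   # incoming-edge weights
        self.w_out = nn.Linear(dim, dim, bias=False)  # outgoing-edge weights
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, h, src, dst):
        # h: (num_nodes, dim); src/dst: LongTensors of directed edges src -> dst
        agg = torch.zeros_like(h)
        agg = agg.index_add(0, dst, self.w_in(h)[src])   # dst receives from src
        agg = agg.index_add(0, src, self.w_out(h)[dst])  # src receives from dst
        return torch.relu(agg + self.bias)
```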
, "To regularize the models we apply dropout (Srivastava et al., 2014) as well as edge dropout (Marcheggiani and Titov, 2017).", "We also include highway connections (Srivastava et al., 2015) between GCN layers.", "While GCNs can naturally be used to encode graphs, they can also be applied to trees by removing reentrancies from the input graphs.", "In the experiments of Section 5, we explore GCN-based models both as graph encoders (reentrancies are maintained) and as tree encoders (reentrancies are ignored).", "We aimed at stacking the explicit source of structural information provided by TreeLSTMs and GCNs with the sequential information which BiLSTMs extract well.", "This was shown to be effective for other tasks with both TreeLSTMs (Eriguchi et al., 2016; Chen et al., 2017) and GCNs (Marcheggiani and Titov, 2017; Cetoli et al., 2017; Bastings et al., 2017).", "In previous work, the structural encoders (tree or graph) were used on top of the BiLSTM network: first, the input is passed through the sequential encoder, the output of which is then fed into the structural encoder.", "While we experiment with this approach, we also propose an alternative solution where the BiLSTM network is used on top of the structural encoder: the input embeddings are refined by exploiting the explicit structural information given by the graph.", "The refined embeddings are then fed into the BiLSTM networks.", "See Figure 2 for a graphical representation of the two approaches.", "In our experiments, we found this approach to be more effective.", "Compared to models that interleave structural and recurrent components such as the systems of Song et al. (2018) and Beck et al. (2018), stacking the components allows us to test for their contributions more easily."
, "In this setup, BiLSTMs are used as in Section 3.1 to encode the linearized and anonymized AMR.", "The context provided by the BiLSTM is a sequential one.", "We then apply either GCN or TreeLSTM on the output of the BiLSTM, by initializing the GCN or TreeLSTM embeddings with the BiLSTM hidden states.", "We call these models SEQGCN and SEQTREELSTM .", "We also propose a different approach for integrating graph information into the encoder, by swapping the order of the BiLSTM and the structural encoder: we aim at using the structured information provided by the AMR graph as a way to refine the original word representations.", "We first apply the structural encoder to the input graphs.", "The GCN or TreeLSTM representations are then fed into the BiLSTM.", "We call these models GCNSEQ and TREELSTMSEQ (a sketch of this stacking is given below).", "The motivation behind this approach is that we know that BiLSTMs, given appropriate input embeddings, are very effective at encoding the input sequences.", "In order to exploit their strength, we do not amend their output but rather provide them with better input embeddings to start with, by explicitly taking the graph relations into account.", "We use both BLEU (Papineni et al., 2002) and Meteor (Banerjee and Lavie, 2005) as evaluation metrics; we used the evaluation script available at https://github.com/sinantie/NeuralAmr.", "We report results on the AMR datasets LDC2015E86 and LDC2017T10.", "All systems are implemented in PyTorch (Paszke et al., 2017) using the framework OpenNMT-py (Klein et al., 2017).", "Hyperparameters of each model were tuned on the development set of LDC2015E86.", "For the GCN components, we use two layers, ReLU activations, and tanh highway layers.", "We use single-layer LSTMs.", "We train with SGD with the initial learning rate set to 1 and decay to 0.8.", "Batch size is set to 100.", "We first evaluate the overall performance of the models, after which we focus on two phenomena that we expect to benefit most from structural encoders: reentrancies and long-range dependencies."
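Before turning to the evaluation, the GCNSEQ stacking referenced above can be sketched as follows, reusing the DirectionalGCNLayer from the previous sketch (batching and padding are omitted for brevity):

```python
import torch.nn as nn

class GCNSeq(nn.Module):
    """Structure-then-sequence encoder: GCN layers first refine the input
    embeddings with graph information, then a BiLSTM contextualizes them."""
    def __init__(self, dim, num_gcn_layers=2):
        super().__init__()
        self.gcn = nn.ModuleList([DirectionalGCNLayer(dim)
                                  for _ in range(num_gcn_layers)])
        self.bilstm = nn.LSTM(dim, dim // 2, num_layers=1,
                              bidirectional=True, batch_first=True)

    def forward(self, embeddings, src, dst):
        h = embeddings                   # (num_nodes, dim), linearization order
        for layer in self.gcn:
            h = layer(h, src, dst)       # graph-refined embeddings
        out, _ = self.bilstm(h.unsqueeze(0))
        return out.squeeze(0)            # (num_nodes, dim) node embeddings
```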
(2017).", "We test both approaches of stacking structural and sequential components: structure on top of sequence (SEQTREELSTM and SEQGCN), and sequence on top of structure (TREELSTMSEQ and GCNSEQ ).", "To inspect the effect of the sequential component, we run ablation tests by removing the RNNs altogether (TREELSTM and GCN).", "GCN-based models are used both as tree encoders (reentran-cies are removed) and graph encoders (reentran-cies are maintained).", "For both TreeLSTM-based and GCN-based models, our proposed approach of applying the structural encoder before the RNN achieves better scores.", "This is especially true for GCN-based models, for which we also note a drastic drop in performance when the RNN is removed, highlighting the importance of a sequential component.", "On the other hand, RNN layers seem to have less impact on TreeLSTM-based models.", "This outcome is not unexpected, as TreeLSTMs already use LSTM gates in their computation.", "The results show a clear advantage of tree and graph encoders over the sequential encoder.", "The best performing model is GCNSEQ , both as a tree and as a graph encoder, with the latter obtaining the highest results.", "Table 2 shows the comparison between our best sequential (SEQ ), tree (GCNSEQ without reentrancies, henceforth called TREE ) and graph en-Model BLEU Meteor LDC2015E86 SEQ 21.43 21.53 TREE 23.93 23.32 GRAPH 24.40 23.60 Konstas et al. (2017) 22.00 Song et al. (2018) 23.30 LDC2017T10 SEQ 22.19 22.68 TREE 24.06 23.62 GRAPH 24.54 24.07 Beck et al. (2018) 23.30 Table 2: Scores on the test split of LDC2015E86 and LDC2017T10.", "coders (GCNSEQ with reentrancies, henceforth called GRAPH ) on the test set of LDC2015E86 and LDC2017T10.", "We also include state-of-the-art results reported on these datasets for sequential encoding (Konstas et al., 2017) and graph encoding (Song et al., 2018; Beck et al., 2018).", "3 In order to mitigate the effects of random seeds, we train five models with different random seeds and report the results of the median model, according to their BLEU score on the development set (Beck et al., 2018).", "We achieve state-of-the-art results with both tree and graph encoders, demonstrating the efficacy of our GCNSeq approach.", "The graph encoder outperforms the other systems and previous work on both datasets.", "These results demonstrate the benefit of structural encoders over purely sequential ones as well as the advantage of explicitly including reentrancies.", "The differences between our graph encoder and that of Song et al. (2018) and Beck et al. 
, "Overall scores show an advantage of the graph encoder over tree and sequential encoders, but they do not shed light on how this is achieved.", "Because graph encoders are the only ones to model reentrancies explicitly, we expect them to deal better with these structures.", "It is, however, possible that the other models are capable of handling these structures implicitly.", "Moreover, the dataset contains a large number of examples that do not involve any reentrancies, as shown in Table 3, so that the overall scores may not be representative of the ability of models to capture reentrancies.", "It is expected that the benefit of the graph models will be more evident for those examples containing more reentrancies.", "To test this hypothesis, we evaluate the various scenarios as a function of the number of reentrancies in each example, using the Meteor score as a metric; for this analysis we use Meteor instead of BLEU because it is a sentence-level metric, unlike BLEU, which is a corpus-level metric.", "Table 4 shows that the gap between the graph encoder and the other encoders is widest for examples with more than six reentrancies.", "The Meteor score of the graph encoder for these cases is 3.1% higher than the one for the sequential encoder and 2.3% higher than the score achieved by the tree encoder, demonstrating that explicitly encoding reentrancies is more beneficial than the overall scores suggest.", "Interestingly, it can also be observed that the graph model outperforms the tree model even for examples with no reentrancies, where tree and graph structures are identical.", "This suggests that preserving reentrancies in the training data has other beneficial effects.", "In Section 5.2 we explore one: better handling of long-range dependencies.", "5.1.1 Manual Inspection.", "In order to further explore how the graph model handles reentrancies differently from the other models, we performed a manual inspection of the models' output.", "We selected examples containing reentrancies, where the graph model performs better than the other models.", "These are shown in Table 5."
, "In Example (1), we note that the graph model is the only one that correctly predicts the phrase he finds out .", "The wrong verb tense is due to the lack of tense information in AMR graphs.", "In the sequential model, the pronoun is chosen correctly, but the wrong verb is predicted, while in the tree model the pronoun is missing.", "In Example (2), only the graph model correctly generates the phrase you tell them , while none of the models use people as the subject of the predicate can .", "In Example (3), both the graph and the sequential models deal well with the control structure caused by the recommend predicate.", "The sequential model, however, overgenerates a wh-clause.", "Finally, in Example (4) the tree and graph models deal correctly with the possessive pronoun to generate the phrase tell your ex , while the sequential model does not.", "Overall, we note that the graph model produces a more accurate output than the sequential and tree models by generating the correct pronouns and mentions when control verbs and co-references are involved.", "For a quantitative analysis of how the different models handle pronouns, we use a method for inspecting NMT output for specific linguistic phenomena, based on contrastive pairs (Sennrich, 2017).", "Given a reference output sentence, a contrastive sentence is generated by introducing a mistake related to the phenomenon we are interested in evaluating.", "The probability that the model assigns to the reference sentence is then compared to that of the contrastive sentence.", "The accuracy of a model is determined by the percentage of examples in which the reference sentence has a higher probability than the contrastive sentence.", "We produce contrastive examples by running CoreNLP (Manning et al., 2014) to identify co-references, which are the primary cause of reentrancies, and introducing a mistake.", "When an expression has multiple mentions, the antecedent is repeated in the linearized AMR.", "For instance, the linearization of Figure 1(b) contains the token he twice, which instead appears only once in the sentence.", "This repetition may result in generating the token he twice, rather than using a pronoun to refer back to it.", "To investigate this possible mistake, we replace one of the mentions with the antecedent (e.g., John ate the pizza with his fingers is replaced with John ate the pizza with John fingers , which is ungrammatical and as such should be less likely).", "An alternative hypothesis is that even when the generation system correctly decides to predict a pronoun, it selects the wrong one.", "To test for this, we produce contrastive examples where a pronoun is replaced by either a different type of pronoun (e.g., John ate the pizza with his fingers is replaced with John ate the pizza with him fingers ) or by the same type of pronoun but for a different number ( John ate the pizza with their fingers ) or different gender ( John ate the pizza with her fingers ).", "Note from Figure 1 that the graph-structured AMR is the one that most directly captures the relation between finger and he , and as such it is expected to deal better with this type of mistake.", "From the test split of LDC2017T10, we generated 251 contrastive examples due to antecedent replacements, 912 due to pronoun type replacements, 1840 due to number replacements and 95 due to gender replacements.", "The results are shown in Table 6."
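The contrastive evaluation can be sketched as below; this uses a generic HuggingFace-style encoder-decoder interface for illustration (the authors' systems are OpenNMT-py models), and the length re-scaling of the mean loss is our simplification:

```python
import torch

def prefers_reference(model, tokenizer, source, reference, contrastive):
    """True if the model assigns a higher log-probability to the reference
    sentence than to the corrupted (contrastive) one."""
    def sentence_log_prob(target):
        enc = tokenizer(source, return_tensors="pt")
        labels = tokenizer(target, return_tensors="pt").input_ids
        with torch.no_grad():
            mean_nll = model(**enc, labels=labels).loss
        return -mean_nll.item() * labels.size(1)  # total log-probability
    return sentence_log_prob(reference) > sentence_log_prob(contrastive)

# e.g., an antecedent-replacement pair:
# prefers_reference(model, tok, linearized_amr,
#                   "John ate the pizza with his fingers",
#                   "John ate the pizza with John fingers")
```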
, "The sequential encoder performs surprisingly well at this task, with performance better than or on par with that of the tree encoder.", "The graph encoder outperforms the sequential encoder only for pronoun number and gender replacements.", "Future work is required to more precisely analyze whether the different models cope with pronominal mentions in significantly different ways.", "Other approaches to inspect phenomena of co-reference and control verbs can also be explored, for instance by devising specific training objectives (Linzen et al., 2016).", "When we encode a long sequence, interactions between items that appear distant from each other in the sequence are difficult to capture.", "The problem of long-range dependencies in natural language is well known for RNN architectures (Bengio et al., 1994).", "Indeed, the need to solve this problem motivated the introduction of LSTM models, which are known to model long-range dependencies better than traditional RNNs.", "Because the nodes in the graphs are not aligned with words in the sentence, AMR has no notion of distance between the nodes taking part in an edge.", "In order to define the length of an AMR edge, we resort to the AMR linearization discussed in Section 2: given the linearization of the AMR $x_1, \ldots, x_N$ and an edge between two nodes $x_i$ and $x_j$, the length of the edge is defined as $|j - i|$.", "For instance, in the AMR of Figure 1, the edge between eat-01 and :instrument is a dependency of length five, because of the distance between the two words in the linearization eat-01 :arg0 he :arg1 pizza :instrument .", "We then compute the maximum dependency length for each AMR graph.", "To verify the hypothesis that long-range dependencies contribute to the improvements of graph models, we compare the models as a function of the maximum dependency length in each example.", "Longer dependencies are sometimes caused by reentrancies, as in the dependency between :part-of and he in Figure 1."
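A small helper makes the measure explicit; edges are given as index pairs into the linearization, following the definition above:

```python
def max_dependency_length(edges):
    """Maximum |j - i| over all edges (i, j), where i and j index the
    positions of the two endpoints in the linearized AMR."""
    return max((abs(j - i) for i, j in edges), default=0)

# e.g., the eat-01 -> :instrument edge of Figure 1 spans positions 0 and 5
# in "eat-01 :arg0 he :arg1 pizza :instrument", giving length 5.
```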
, "To verify that the contribution in terms of longer dependencies is complementary to that of reentrancies, we exclude sentences with reentrancies from this analysis.", "Table 7 shows the statistics for this measure.", "Results are shown in Table 8.", "The graph encoder always outperforms both the sequential and the tree encoder.", "The gap with the sequential encoder increases for longer dependencies.", "This indicates that longer dependencies are an important factor in improving results for both tree and graph encoders, especially for the latter.", "We introduced models for AMR-to-text generation with the purpose of investigating the difference between sequential, tree and graph encoders.", "We showed that encoding reentrancies improves overall performance.", "We observed bigger benefits when the input AMR graphs have a larger number of reentrant structures and longer dependencies.", "Our best graph encoder, which consists of a GCN wired to a BiLSTM network, improves over the state of the art on all tested datasets.", "We inspected the differences between the models, especially in terms of co-references and control structures.", "Further exploration of graph encoders is left to future work, which may prove crucial to improving performance further.", "The authors would like to thank the three anonymous reviewers and Adam Lopez, Ioannis Konstas, Diego Marcheggiani, Sorcha Gilroy, Sameer Bansal, Ida Szubert and Clara Vania for their help and comments.", "This research was supported by a grant from Bloomberg and by the H2020 project SUMMA, under grant agreement 688139." ]
[ "abstain", "abstain", "abstain", "result", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "result", "result", "abstain", "abstain", "other", "other" ]
[ "Local models for Entity Disambiguation (ED) have today become extremely powerful, in most part thanks to the advent of large pre-trained language models.", "However, despite their significant performance achievements, most of these approaches frame ED through classification formulations that have intrinsic limitations, both computationally and from a modeling perspective.", "In contrast with this trend, here we propose EXTEND, a novel local formulation for ED where we frame this task as a text extraction problem, and present two Transformer-based architectures that implement it.", "Based on experiments in and out of domain, and training over two different data regimes, we find our approach surpasses all its competitors in terms of both data efficiency and raw performance.", "EXTEND outperforms its alternatives by as few as 6 F 1 points on the more constrained of the two data regimes and, when moving to the other higher-resourced regime, sets a new state of the art on 4 out of 6 benchmarks under consideration, with average improvements of 0 .", "7 F 1 points overall and 1 .", "1 F 1 points out of domain.", "In addition, to gain better insights from our results, we also perform a fine-grained evaluation of our performances on different classes of label frequency, along with an ablation study of our architectural choices and an error analysis.", "We release our code and models for research purposes at https:// github.com/SapienzaNLP/extend .", "Being able to associate entity mentions in a given text with the correct entity they refer to is a crucial task in Natural Language Processing (NLP).", "Formally referred to as Entity Disambiguation (ED), this task entails, given a mention m occurring in a text c m , identifying the correct entity e out of a set of candidates e 1 , . . . , e n , coming from a reference knowledge base (KB).", "First introduced by Bunescu Equal contribution.", "and Pasca (2006), ED aims to identify the actors involved in human language and, as such, has shown potential in downstream applications like Question Answering (Yin et al., 2016), Information Extraction (Ji and Grishman, 2011; Guo et al., 2013), Text Generation (Puduppully et al., 2019) and Semantic Parsing (Bevilacqua et al., 2021; Procopio et al., 2021).", "Since the advent of Deep Learning within the NLP community, this task has mostly been framed as a multi-label classification problem (Shahbazi et al., 2019; Broscheit, 2019), especially leveraging the bi-encoder paradigm (Humeau et al., 2020; Wu et al., 2020).", "However, although simple and yet powerful enough to push scores past 90 % inKB Micro F 1 on standard benchmarks, this formulation suffers from a number of downsides.", "First, the actual disambiguation is only modeled through a dot product between independent mention and entity vectors, which may not capture complex mention-entity interactions.", "Second, from a computational perspective, entities are represented through high-dimensional vectors that are cached in a pre-computed index.", "Thus, classifying against a large KB has a significant memory cost that, in fact, scales linearly with respect to the number of entities.", "Besides this, adding a new entity also requires modifying the index itself.", "To address these issues, De Cao et al. 
, "To address these issues, De Cao et al. (2021b) have recently proposed an auto-regressive formulation where, given mentions in their context, models are trained to generate, token-by-token, the correct entity identifiers, i.e., textual descriptions of the entities (De Cao et al. (2021b) use the titles of Wikipedia articles, since their reference KB is Wikipedia).", "While this approach has addressed the aforementioned issues effectively, it requires an autoregressive decoding process, which has speed implications, and, what is more, does not let the model see its possible output choices, something that has shown significant potential in other semantic tasks (Barba et al., 2021a).", "In this work, we focus on these shortcomings and, inspired by this latter research trend, propose Extractive Entity Disambiguation (EXTEND), the first entity disambiguator that frames ED as a text extraction task.", "Given as input a context $c_m$ in which a mention $m$ occurs, along with a text representation for each of the possible candidates $e_1, \ldots, e_n$, a model has to extract the span associated with the text representation of the entity that best suits $m$.", "We implement this formulation through two architectures:", "i) a Transformer system (Vaswani et al., 2017; Devlin et al., 2019) that features an almost identical modeling power to that of previous works, and", "ii) a variant that relaxes the computational requirements of our approach when using common Transformer-based architectures.", "Evaluating our two systems over standard benchmarks, we find our formulation to be particularly suited to ED.", "In particular, when restricting training resources to the AIDA-CoNLL dataset (Hoffart et al., 2011) only, EXTEND appears to be significantly more data-efficient than its alternatives, surpassing them by more than 6 inKB Micro F1 points on average across in-domain and out-of-domain datasets."
, "Furthermore, when pre-training on external ED data as in De Cao et al. (2021b), our system sets a new state of the art on 4 out of 6 benchmarks under consideration, with average improvements of 0.7 overall and 1.1 when moving out of domain.", "Finally, we also perform a thorough investigation of our system performances, providing insights and pinpointing the reasons behind our improvements via a fine-grained evaluation on different label-frequency classes.", "Our contributions are therefore as follows: We propose a new framing of ED as a text extraction task; We put forward two architectures that implement our formulation, whose average score across different benchmarks surpasses all previous works in both data regimes we consider; We perform a thorough analysis of our systems' performances, evaluating their behavior over different label-frequency classes.", "Entity Disambiguation (ED) is the task of identifying, given a mention in context, the most suitable entity among a set of candidates stored in a knowledge base (KB).", "Generally the last step in an Entity Linking system (Broscheit, 2019), coming immediately after mention detection and candidate generation, this task has been the object of a vast and diverse literature, with approaches typically clustered into two groups, depending on how they model co-occurring mentions in the same document.", "On the one hand, global models strive to enforce a global coherence across the disambiguations within the same document, leveraging different techniques and heuristics to approximate this objective (Hoffart et al., 2011; Moro et al., 2014; Yamada et al., 2016; Ganea and Hofmann, 2017; Le and Titov, 2018; Yang et al., 2018); approximation is necessary, as the exact computation of coherence objectives is NP-hard (Le and Titov, 2018).", "On the other hand, local models disambiguate each mention independently of the others, conditioning the entity choice only on the mention and its context.", "Thanks to the advent of large pre-trained language models, this group has recently witnessed a significant improvement in performances, which are nowadays on par with, or even above, those achieved by state-of-the-art global systems (Shahbazi et al., 2019).", "These approaches usually frame ED as a multi-label classification problem (Broscheit, 2019), and a diverse set of formulations have been proposed.", "Among these, the bi-encoder paradigm (Bromley et al., 1994; Humeau et al., 2020) has been particularly successful (Gillick et al., 2019; Tedeschi et al., 2021; Botha et al., 2020): here, two encoders are trained to learn vector representations in a shared space for mentions in context and entities, respectively.", "Classification of a given mention is then performed by retrieving the entity whose representation is closest according to some metric (e.g. cosine similarity).", "Although remarkably powerful, these formulations present a number of disadvantages, such as their large memory footprint (each entity in the KB needs to be represented by a high-dimensional vector) and the fact that the actual disambiguation process is only expressed via a dot product of independently computed vectors, potentially neglecting mention-entity interactions.", "While a number of works (Logeswaran et al., 2019; Wu et al., 2020) attempt to address the latter issue via multi-stage approaches, where a cross-encoder is stacked after an initial bi-encoder (this bi-encoder, rather than performing the actual classification, is tasked with generating a filtered set of candidates) or other retrieval functions, an interesting alternative direction that tackles both problems was recently presented by De Cao et al. (2021b): the authors frame ED as a generation problem and, leveraging an auto-regressive formulation, train a sequence-to-sequence model to generate the correct entity identifier for a given mention and its context."
, "Nevertheless, while this approach can model more complex interactions, some of these can only occur indirectly, inside the backtracking of the beam search.", "Furthermore, the disambiguation involves an auto-regressive decoding that, although mitigated by later efforts (De Cao et al., 2021a), has intrinsic speed limitations.", "In contrast, here we propose an extractive formulation, where a model receives as input the mention, its context and the text representation of each candidate, and has to extract the span corresponding to the representation of the entity that best matches the (mention, context) pair under consideration.", "Note that this differs from the aforementioned cross-encoder formulations (Logeswaran et al., 2019; Wu et al., 2020) where, instead, each entity was encoded together with the (mention, context) pair, but independently from all the other entities.", "With our schema, complex mention-entity and entity-entity interactions can be explicitly modeled by the neural system, as all the information is provided in input.", "Glancing over other related tasks in the area of semantics, arguably closest to our work is ESC (Barba et al., 2021a), where the authors propose a new framing of Word Sense Disambiguation (WSD) as an extractive sense comprehension task.", "Yet, differently from their work, we propose here a new framing for ED, i.e., we focus on entity descriptions rather than word sense definitions, present a baseline system that implements it and devise an additional architecture that deals with the computational challenges that arise from such an implementation.", "We now introduce EXTEND, our proposed approach for ED.", "We first present the formulation we adopt (Section 3.1) and then describe the two architectures that implement it (Section 3.2).", "Inspired by recent trends in other semantic tasks (Barba et al., 2021a), we formulate Entity Disambiguation as a text extraction problem: given a query $x_q$ and a context $x_c$, a model has to learn to extract the text span of $x_c$ that best answers $x_q$.", "Formally, let $m$ be a mention occurring in a context $c_m$, and denote by $Cnd(m) = \{cnd_1, \ldots, cnd_n\}$ the set of $n$ text representations associated with each candidate of $m$.", "Then, we formulate ED as follows: we treat the tuple $(m, c_m)$ and the concatenation of $cnd_1, \ldots, cnd_n$ as the query $x_q$ and the context $x_c$, respectively, and train a model to extract the text span from $x_c$ associated with the correct $cnd \in Cnd(m)$; the overall process is illustrated in Figure 1.", "This formulation helps to better model the input provided, with the possible candidates of $m$ included in the contextualization process, while also disposing of large output vocabularies as in De Cao et al. (2021b) and, yet, not resorting to auto-regressive decoding strategies."
"To implement our formulation, we consider two Transformer-based architectures.", "For both of these, the input is composed of the concatenation of the query x_q and the context x_c, subword-tokenized and separated by a [SEP] special symbol.", "Since x_q is a tuple in our formulation, whereas Transformer models only support text sequences as input, we convert x_q into a string x_q' by taking only c_m and surrounding the text span where m occurs with the special tokens <t> and </t>.", "Additionally, to better separate entity candidate representations and ease the identification of their full spans, we add a trailing special symbol </ec> to each of them; henceforth, we denote the resulting modified context by x_c'.", "As our first architecture, we use two independent classification heads on top of BART (Lewis et al., 2020), computing, respectively, for each word w in x_c', whether w is the start or the end of the correct entity representation cnd*.", "We train the model with a cross-entropy criterion over the start and end of cnd*.", "At inference time, we select the entity candidate representation cnd* ∈ Cnd(m) whose joint probability over the two heads is highest.", "However, framing ED as we propose here implies that the length of the input to the model scales linearly with the number of candidates n.", "Taking into account that the attention mechanism of Transformer architectures has quadratic complexity, and that several pre-trained models actually support inputs only up to a fixed maximum length (for instance, the implementation of BART available in HuggingFace Transformers (Wolf et al., 2020) supports inputs only up to 1024 subwords), this might pose significant computational limitations depending on the dataset and knowledge base under consideration.", "To cope with these technical challenges, we consider a second system, similar to the previous one except for two main differences.", "First, we change the underlying Transformer model, replacing BART with a pre-trained Longformer model (Beltagy et al., 2020), a Transformer architecture with an attention mechanism that is linear with respect to the input length and that can handle longer sequences.", "This linear complexity is achieved by essentially applying a sliding attention window over each token, except for a few pre-selected ones (e.g., [CLS]), which instead feature a symmetric global attention: they attend upon and are attended by all the other tokens in the input sequence.", "This global mechanism is intended to be task-specific and enables the model to learn representations potentially close to those standard fully-attentive Transformers would learn, while keeping the overall attention complexity linear with respect to the input size.", "Therefore, as our second modification, we adapt this global pattern to our setting, activating it on the [CLS] special token and on the first token of each cnd_i ∈ Cnd(m); this allows the model to better mimic the original quadratic mechanism, where different entity candidate representations can also attend upon each other.", "Furthermore, differently from Beltagy et al. (2020), we disable the global attention mechanism on the tokens in the query x_q'.", "In Section 5, we report and discuss the impact of these modifications.", "We illustrate the proposed architecture in Figure 2.",
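As a rough illustration of this second architecture, the sketch below (ours, not the official EXTEND code) configures a HuggingFace Longformer with the global-attention pattern described above, i.e., on [CLS] and on the first token of each candidate; the helper `first_candidate_token_positions` and the two linear heads are hypothetical placeholders, and the special tokens would first need to be added to the tokenizer:

```python
# Sketch (ours) of the Longformer-based variant of EXTEND.
import torch
from transformers import LongformerModel, LongformerTokenizerFast

tok = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

# <t>, </t> and </ec> are the paper's special symbols; in practice they must
# be registered via tok.add_tokens(...) and the embedding matrix resized.
query = "<t> Ronaldo </t> scored two goals in the last game."
context = "Cristiano Ronaldo </ec> Ronaldo (Brazilian footballer) </ec>"
enc = tok(query, context, return_tensors="pt")

# Global attention on the start-of-sequence token and on the first token
# of each candidate representation; all other tokens use the sliding window.
global_mask = torch.zeros_like(enc["input_ids"])
global_mask[0, 0] = 1
for pos in first_candidate_token_positions(enc):  # hypothetical helper
    global_mask[0, pos] = 1

out = model(**enc, global_attention_mask=global_mask)
start_logits = start_head(out.last_hidden_state)  # hypothetical linear heads
end_logits = end_head(out.last_hidden_state)
```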
"4 Entity Disambiguation Evaluation We now assess the effectiveness of EXTEND on Entity Disambiguation.", "We first introduce the experimental setup we consider (Section 4.1).", "Then, we present the results achieved by EXTEND, both in terms of raw performance (Section 4.2) and via a breakdown of its behavior on different classes of label frequency (Section 4.3).", "For ease of readability, we focus here only on the Longformer-based architecture, which we consider our main model.", "We defer the comparison with the BART-based system to Section 5.", "4.1 Experimental Setup Data To evaluate EXTEND on Entity Disambiguation, we reproduce the same setting used by De Cao et al. (2021b).", "Specifically, we adopt their same candidate sets, which were originally proposed by Le and Titov (2018), use Wikipedia titles (e.g., Metropolis (comics)) as the text representation for entities, and perform training, along with in-domain evaluation, on the AIDA-CoNLL dataset (Hoffart et al., 2011, AIDA); similarly, we use their cleaned versions of MSNBC, AQUAINT, ACE2004, WNED-CWEB (CWEB) and WNED-WIKI (WIKI) (Guo and Barbosa, 2018; Evgeniy et al., 2013) for out-of-domain evaluation.", "While we use this AIDA-only training scenario, which we refer to as AIDA, to test the data efficiency of EXTEND, most ED systems actually make use of additional data and information originating from Wikipedia at training time.", "We denote this additional training scenario, where Wikipedia is part of the training resources, as Wikipedia+AIDA.", "Specifically, as our system is a supervised neural classifier, we follow De Cao et al. (2021b) and utilize BLINK data (Wu et al., 2020) for ED pretraining in this setting.", "A brief description of each dataset follows:", "i) AIDA: one of the largest manually annotated corpora for Entity Linking and Disambiguation.", "It contains 388 articles from the Reuters Corpus with 27,724 labeled mentions.", "The training set contains 18,448 instances, while the validation and test sets feature 4,791 and 4,485 samples, respectively.", "ii) MSNBC: a small news corpus with 20 articles from MSNBC on 10 different topics.", "It contains 656 annotated instances.", "iii) AQUAINT: a news corpus composed of 50 documents with news coming from the Xinhua News Service, the New York Times and the Associated Press.", "It contains 727 annotated instances.", "iv) ACE2004: a manually annotated subset of the ACE co-reference data set (Doddington et al., 2004).", "It contains 257 annotated instances.", "v) CWEB: a dataset automatically extracted from the ClueWeb corpus (https://lemurproject.org/clueweb12) by Guo and Barbosa (2018), containing English websites and consisting of 11,154 annotated instances.", "vi) WIKI: an automatically extracted corpus comprised of Wikipedia pages released by Evgeniy et al. (2013), with 6,821 annotated instances.", "vii) BLINK: a dataset made up of 9 million (document, entity, mention) triples automatically extracted from Wikipedia.", "For each of these resources, all freely available for research purposes, we use the preprocessed datasets, along with the mention candidate sets, made available by De Cao et al. (2021b) in the authors' official repository.",
"Evaluation Following common practice in the ED literature, results over the evaluation datasets are expressed in terms of inKB Micro F1.", "Furthermore, to better highlight the performance on the out-of-domain datasets, we report both the average score over those and AIDA (Avg) and over those alone (Avg_OOD), that is, when the result on AIDA is excluded from the average.", "Comparison Systems In order to contextualize EXTEND's performance within the current landscape of Entity Disambiguation, we evaluate our approach against recent state-of-the-art systems in the literature.", "Specifically, we consider: Global Models: Ganea and Hofmann (2017); Guo and Barbosa (2018); Yang et al. (2018, 2019); Le and Titov (2019); Fang et al. (2019); Local Models: Shahbazi et al. (2019) and Tedeschi et al. (2021); and the auto-regressive approach proposed by De Cao et al. (2021b).", "EXTEND Setup As previously mentioned, we use the Longformer model (Beltagy et al., 2020) as our reference architecture and retrieve the pre-trained weights, for both its base and large variants, from the HuggingFace Transformers library (Wolf et al., 2020); we refer to these variants as EXTEND_Base (149M parameters) and EXTEND_Large (435M parameters).", "Following standard practice from GENRE, we use the last encoder output as the representation of each token, and a simple linear layer on top of it to compute the start and end token probability distributions.", "We use a 64-token attention window and fine-tune the whole architecture using the Rectified Adam (Liu et al., 2020) optimizer with a learning rate of 10^-5 for at most 100,000 steps.", "We use 8 steps of gradient accumulation and batches made of a maximum of 1024 tokens.", "We evaluate the model on the validation dataset every 2000 steps, enforcing a patience of 15 evaluation rounds.", "We train every model for a single run on a GeForce RTX 3090 graphics card with 24 gigabytes of VRAM.", "Due to computational constraints, we do not perform any hyperparameter tuning, except for the attention window, where we try [32, 64, 128], and select the other hyperparameters following previous literature.", "We implement our work in PyTorch (Paszke et al., 2019), using classy (https://github.com/sunglasses-ai/classy) as the underlying framework.", "We report in Table 1 (top) the inKB Micro F1 score EXTEND and its comparison systems attain on the evaluation datasets in the Wikipedia+AIDA setting.", "Arguably the most interesting finding we report is the improvement EXTEND achieves over its comparison systems.", "EXTEND_Large + BLINK, that is, EXTEND_Large pre-trained on BLINK (we note that, due to computational and hardware constraints, we were unable to match the training configuration of De Cao et al. (2021b), and our pre-training performed a significantly smaller number of updates) and then fine-tuned on AIDA, sets a new state of the art on 4 out of 6 datasets, the only exceptions being in-domain AIDA and CWEB, where we fall short compared to the global model of Yang et al. (2018).", "On the Avg score, EXTEND pushes performance up by 0.7 points, and this improvement becomes even more marked when considering Avg_OOD (+1.1).", "These results suggest that our approach is indeed well-suited for ED and, furthermore, is particularly effective when scaling out of domain.", "Additionally, we also evaluate EXTEND in the AIDA-only training setting and compare against De Cao et al. (2021b) and Tedeschi et al. (2021), the only systems available in this setting.",
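As a reading aid for the numbers above, the following toy sketch (ours; the scores are placeholders, not results from the paper, and the metric is simplified) computes a micro F1 restricted to mentions whose gold entity is in the KB, together with the Avg and Avg_OOD aggregates:

```python
# Simplified sketch of inKB Micro F1 plus the Avg / Avg_OOD aggregates.
def inkb_micro_f1(gold, pred):
    """gold/pred: entity ids per mention, None = NIL; only inKB gold counts."""
    scored = [(g, p) for g, p in zip(gold, pred) if g is not None]
    tp = sum(g == p for g, p in scored)
    prec = tp / max(sum(p is not None for _, p in scored), 1)
    rec = tp / max(len(scored), 1)
    return 2 * prec * rec / max(prec + rec, 1e-9)

scores = {"AIDA": 90.0, "MSNBC": 80.0, "WIKI": 70.0}  # placeholder values
avg = sum(scores.values()) / len(scores)
avg_ood = sum(v for k, v in scores.items() if k != "AIDA") / (len(scores) - 1)
```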
"As shown in Table 1 (bottom), EXTEND behaves better, with both EXTEND_Base and EXTEND_Large achieving higher Avg scores.", "In particular, EXTEND_Base, which features only 149M parameters, fares better (by almost 5 points) than De Cao et al. (2021b), whose model counts 406M parameters (roughly 2.7 times as many).", "Moreover, the Avg_OOD results, which are also higher, further confirm our previous hypothesis as regards the benefits of our approach for out-of-domain scalability.", "Taken together, these results highlight the higher data efficiency that our formulation achieves in comparison to its alternatives.", "Inspired by standard practices in the evaluation of Word Sense Disambiguation systems (Blevins and Zettlemoyer, 2020; Barba et al., 2021a), we perform a fine-grained analysis where we break down the performance of our model into different classes of label frequency.", "To this end, we partition both the AIDA test set and the concatenation of all the out-of-domain datasets into three different subsets:", "i) MFC, containing all the instances in the test set where the target mention is annotated with its most frequent candidate in the training corpus (i.e., the AIDA training split);", "ii) LFC, containing all the instances in the test set annotated with a less frequent candidate of the target mention, one that appeared at least once in the training corpus;", "iii) Unseen, containing all the instances in the test set whose mention was never seen in the training corpus.", "We then evaluate all the systems of the AIDA setting on these six test sets, except for De Cao et al. (2021b), for which the original model is unavailable.", "To put the results in perspective, we introduce a simple baseline (PEM-MFC) that consists in always predicting the most frequent candidate for each mention, taking mention-candidate frequencies from Le and Titov (2018).", "As we can see from Table 2, PEM-MFC is a rather strong baseline, confirming that the distribution with which each mention is annotated with one of its possible candidates is skewed towards the most frequent ones.", "Indeed, the gap between the performance of all the models on the MFC split and the LFC split is rather large, with a difference of almost 50 points in the out-of-domain setting.", "While future work should investigate the performance on these splits more in depth, here we can see that EXTEND_Base, and especially EXTEND_Large, outperform their competitors on the LFC and Unseen splits, in both the in-domain and out-of-domain settings.", "This highlights the strong generalization capabilities of our proposed approach, which is able to better handle rare or unseen instances at the cost of only 1 point of F1 score on the MFC split of the in-domain setting.",
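A minimal sketch (ours) of the MFC/LFC/Unseen partitioning and the PEM-MFC baseline described above, assuming per-(mention, entity) counts from the AIDA training split; the actual splits also take the candidate sets into account, which we omit here:

```python
from collections import Counter

def partition(test_instances, train_freq):
    """test_instances: (mention, gold_entity) pairs;
    train_freq: Counter over (mention, entity) pairs from training data."""
    splits = {"MFC": [], "LFC": [], "Unseen": []}
    for mention, gold in test_instances:
        cands = {e: c for (m, e), c in train_freq.items() if m == mention}
        if not cands:  # mention never seen in training
            splits["Unseen"].append((mention, gold))
        elif gold == max(cands, key=cands.get):
            splits["MFC"].append((mention, gold))
        else:  # simplified: gold seen but not the most frequent candidate
            splits["LFC"].append((mention, gold))
    return splits

def pem_mfc(mention, pem_freq):
    """PEM-MFC baseline: always predict the most frequent candidate."""
    cands = {e: c for (m, e), c in pem_freq.items() if m == mention}
    return max(cands, key=cands.get) if cands else None
```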
"While the above-mentioned experiments show our approach to be rather effective, we have so far only focused on the Longformer-based architecture, to which we resorted owing to the computational challenges mentioned in Section 3.2.", "We now investigate this model choice, first evaluating how the BART-based system fares.", "Then, we ablate the attention pattern we propose for the Longformer and, finally, discuss the trade-off between our two proposed architectures.", "BART Strictly speaking, the results we reported in the previous section are not exactly conclusive as to whether or not our formulation is beneficial.", "Indeed, while it is true that we use a new formulation, we also rely upon a Transformer model that none of our comparison systems considered.", "Therefore, to better pinpoint the origin of the improvements, we train our BART-based architecture in the AIDA setting; we refer to this model as BART.", "Note that the underlying Transformer is identical to that of De Cao et al. (2021b), except for the final classification heads.", "As shown in Table 3, BART with our extractive formulation attains significantly better performance.", "This finding suggests that the overall improvement does indeed originate from our extractive formulation.", "Furthermore, as the two systems are entirely identical except for the framing adopted, this finding further underlines the data efficiency of our approach.", "Longformer Ablations We now compare our chosen global attention strategy with two standard alternatives.", "First, we consider the schema originally proposed by Beltagy et al. (2020) for question-answering tasks, where all the tokens in the input query (i.e., the text containing the mention) have global attention (Longformer_query).", "Then, we compare against an EXTEND variant where the only token with global attention enabled is the start-of-sequence token (i.e., [CLS]); we refer to this variant as Longformer_CLS.", "Table 3 shows how the three systems behave, reporting both their in-domain and out-of-domain scores, along with the average percentage of tokens in the input sequence with global attention enabled (GA%).", "From these results, we can see that", "i) our approach fares the best, and that", "ii) Longformer_CLS achieves performances almost in the same ballpark, making it a viable option for more computationally limited scenarios.", "BART and Longformer Finally, we compare our two architectures.", "As we can see from Table 3, BART performs better on the in-domain dataset, whereas the Longformer outperforms it in the out-of-domain setting.", "Nevertheless, neither of these differences is very significant and, thus, this result confirms our initial hypothesis that our second architecture is a valid approximation of the standard quadratic attention strategy for the extractive Entity Disambiguation task.", "To further investigate the generalization capabilities of EXTEND, we performed black-box testing (Ribeiro et al., 2020) of our system, leveraging the available test sets.", "Apart from the problem of label frequencies (e.g., unseen entities), we discovered two additional main classes of errors, namely", "i) insufficient context, and", "ii) titles alone might not be enough.", "Insufficient Context Since the average number of candidates for each mention is roughly 50, the probability of having multiple valid candidates given the input context is far from negligible.", "For instance, let us consider the following example: In the last game Ronaldo scored two goals despite coming from a bad injury.",
.", "In this sentence, the mention Ronaldo can refer both to Cristiano Ronaldo, the Portuguese player, and to Ronaldo de Lima, the Brazilian player.", "While this particular problem holds for several instances in the test sets, the performance drop is, in fact, mitigated by the labels skewness towards the most frequent candidates.", "Indeed, the model appears always to predict the most frequent candidate for this kind of instance, therefore being right in the majority of cases.", "Titles might not be enough For both comparability and performance purposes, the text representation we use for a given entity in this work is simply its Wikipedia title.", "While article titles in Wikipedia are rather informative, in several circumstances they do not contain enough information to make them sufficiently distinguishable from other candidates.", "For example, several pages describing Per-sons are entitled just with their respective names and surnames.", "This kind of identifier is especially ineffective if the mentions taken into consideration were not present in the training dataset, or were rare or unseen during the underlying Transformer pre-2485 training.", "To this end, we strongly believe that future research might benefit from focusing on enriching entities' identifiers by adding a small description of the articles (summary) or at least some keyword representing the domain the entity belongs to.", "In this work we presented EXTEND, a novel local formulation for ED that frames this task as a text extraction problem: given as input a string containing a marked mention in context and the text representation of each entity in its candidate set, a model has to extract the span corresponding to the text representation of the correct entity.", "Together with this formulation, we also presented two Transformer models that implement it and, by evaluating them across several experiments, we found our approach to be particularly suited to ED.", "First, it is extremely data efficient, surpassing its alternatives by more than 6 F 1 points when considering an AIDA-only training setting.", "Second, pre-training on BLINK data enables the model to set a new state of the art on 4 out of 6 benchmarks under consideration and yield average improvements of 0 .", "7 F 1 points overall and 1 .", "1 F 1 points when focusing only on out-of-domain evaluation datasets.", "As future work, we plan to relax the requirements towards the candidate set and explore adapting this local formulation to a global one, so as to enforce coherence across predictions.", "For instance, we believe integrating the feedback loop strategy we proposed in Barba et al. (2021b) would be an interesting direction to pursue.", "This work was partially supported by the MIUR under the grant Dipartimenti di eccellenza 2018-2022\" of the Department of Computer Science of Sapienza University. References Edoardo Barba, Tommaso Pasini, and Roberto Navigli. 2021a. ESC: Redesigning WSD with extractive sense comprehension. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies , pages 46614672, Online. Association for Computational Linguistics. Edoardo Barba, Luigi Procopio, and Roberto Navigli. 2021b. ConSeC: Word sense disambiguation as continuous sense comprehension. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing , pages 14921503, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics. 
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Michele Bevilacqua, Rexhina Blloshmi, and Roberto Navigli. 2021. One SPRING to rule them both: Symmetric AMR semantic parsing and generation without a complex pipeline. In Proceedings of AAAI. Terra Blevins and Luke Zettlemoyer. 2020. Moving down the long tail of word sense disambiguation with gloss informed bi-encoders. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1006-1017, Online. Association for Computational Linguistics. Jan A. Botha, Zifei Shan, and Daniel Gillick. 2020. Entity Linking in 100 Languages. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7833-7845, Online. Association for Computational Linguistics. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. 1994. Signature verification using a \"siamese\" time delay neural network." ]
[ "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "method", "other", "objective", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "other", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "result", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other" ]
[ "Brown and Exchange word clusters have long been successfully used as word representations in Natural Language Processing (NLP) systems.", "Their success has been attributed to their seeming ability to represent both semantic and syntactic information.", "Using corpora representing several language families, we test the hypothesis that Brown and Exchange word clusters are highly effective at encoding morphosyntactic information.", "Our experiments show that word clusters are highly capable of distinguishing Parts of Speech.", "We show that increases in Average Mutual Information, the clustering algorithms' optimization goal, are highly correlated with improvements in encoding of morphosyntactic information.", "Our results provide empirical evidence that downstream NLP systems addressing tasks dependent on morphosyntactic information can benefit from word cluster features.", "Distributionally generated word classes (often referred to as word clusters ) are hard clusters, containing all word types observed in a corpus, allocated to clusters based on contextual information observed in the corpus.", "They have found wide use in Natural Language Processing (NLP) systems as an alternative to word embeddings such as word2vec (Mikolov et al., 2013).", "Word clusters differentiate themselves from word embeddings by requiring estimation of many fewer parameters, and by their ability to derive qualitative representations from smaller corpora (Qu et al., 2015; Bansal et al., 2014).", "Brown Clusters (Brown et al., 1992) are a wellknown approach based on hard, hierarchical, distributionally derived groups of word types observed in a corpus of unstructured text, with Average Mutual Information (AMI) as the optimization goal.", "Exchange Clusters are an alternative approach obtained by applying the Exchange Algorithm (Kneser and Ney, 1993) to the same optimization goal.", "Unlike Brown, Exchange outputs a flat clustering, with no hierarchy (Martin et al., 1998).", "When only the bottom of the hierarchy is used, like in this paper, Exchange and Brown clusters are interchangeable.", "Both Brown and Exchange clusters have been used as word representations for various Natural Language Processing tasks such as Part of Speech tagging in clean and noisy text (Swain and Cole, 2016; Owoputi et al., 2013; Derczynski et al., 2015), dependency parsing (Koo et al., 2008; Bansal et al., 2014), Chinese Word Segmentation (Liang, 2005), and Named Entity Recognition (Swain and Cole, 2016; Derczynski et al., 2015; Liang, 2005).", "Word clusters distinguish themselves from word embedding models by their ability to learn from little data (Bansal et al., 2014; Qu et al., 2015); for example, in cases like (Bansal et al., 2014), word clusters outperform other kinds of representations, including word embeddings.", "In the literature, it is often observed that word clusters seem to encode a considerable amount of morphosyntactic and semantic knowledge (Brown et al., 1992; Derczynski et al., 2015).", "However, it has not yet been studied to which extent such knowledge is encoded, as previous work on Brown and Exchange clusters focuses mostly on algorithmic improvements and on applications to different NLP tasks.", "In this work, we present a principled study of the morphosyntactic information encoded in flat word clusters induced exclusively from class-based language models via Brown Clustering and Exchange algorithm.", "In particular, we focus on how well these approaches derive clusters that represent Parts of Speech as a 
"We find that Brown and Exchange clusters are highly effective at representing morphosyntactic information, even when hyper-parameters are set such that they match only the number of Parts of Speech, thereby grouping words into relatively few clusters.", "Our results provide empirical evidence for the observed performance gains when including Brown and Exchange word clusters as features in NLP systems that rely on morphosyntactic information.", "Furthermore, we find that there is a strong correlation between the optimization goal of Brown clustering and the Exchange algorithm (i.e., Average Mutual Information) and performance at Parts of Speech separation, which again confirms the appropriateness of choosing AMI in word clustering for morphosyntactic information.", "Class-based language models address the problem of brittleness in classic n-gram language models by trading precision for performance stability over different text styles (Brown et al., 1992).", "Brown Clustering (Brown et al., 1992) and Exchange (Kneser and Ney, 1993) are greedy algorithms that construct word classes by optimizing for higher Average Mutual Information (AMI).", "Maximizing Average Mutual Information is a proxy for maximizing the log-likelihood of the underlying class-based language model on the given corpus (Martin et al., 1998).", "Despite their age, most research on Brown or Exchange clusters has so far followed two major directions: algorithm improvements and applications in Natural Language Processing.", "In contrast, little focus has been placed on understanding and evaluating the information content of the clusters.", "In the direction of algorithm improvements, work has been done on the effect of greedy merge choices in Brown Clustering (Derczynski and Chester, 2016; Ciosici, 2015) and on the extension of AMI to n-grams (Martin et al., 1998).", "Model relaxations, particularly to Exchange, aim to improve computational performance by reducing the effect of words swapping clusters (Dehdari et al., 2016; Uszkoreit and Brants, 2008).", "As mentioned earlier, both Brown and Exchange clusters have seen many applications in Natural Language Processing (NLP) systems: PoS tagging (Swain and Cole, 2016; Owoputi et al., 2013; Derczynski et al., 2015), dependency parsing (Koo et al., 2008; Bansal et al., 2014), Chinese Word Segmentation (Liang, 2005), and Named Entity Recognition (Swain and Cole, 2016; Derczynski et al., 2015; Liang, 2005).", "Most of this work, like Swain and Cole (2016), uses the word clusters as sources of features which are combined with hand-designed ones.", "While word clusters derived using Exchange and Brown clustering have found wide use in NLP systems, their use has been based on the assumption that they encode morphosyntactic and semantic information, rather than on a principled analysis.", "In relation to Parts of Speech, early on, Martin et al. (1998) concluded that initializing Exchange with PoS-homogeneous clusters has no effect on the final clustering AMI, but that it does help accelerate convergence.", "More recently, Christodoulopoulos et al. (2010) found that Brown clusters match the performance of more sophisticated clustering methods, despite their simple algorithmic construction.",
"The study focused on using word clustering algorithms as sources of prototypal information for prototype-driven learning models for classification.", "In this paper, we study the amount of morphosyntactic information encoded in Brown and Exchange word clusters, with the goal of providing empirical results for a principled use of such clusters in downstream tasks.", "In order to determine the amount of morphosyntactic information encoded in Brown and Exchange word clusters, we measure their ability to separate word types by their Parts of Speech.", "For this, we require cluster quality measures.", "Brown and Exchange clusters do not exist in a metric space; therefore, unsupervised cluster quality measures relying on distances between points or clusters, such as the Silhouette coefficient (Rousseeuw, 1987), cannot be used.", "Instead, we focus on two quality measures that compare clusters with a ground-truth partitioning.", "We work under the hypothesis that Brown and Exchange clusters represent parts of speech and thus consider parts of speech as the ground-truth partitioning of the data.", "This makes it possible to use cluster quality measures that require as input an existing ground-truth partitioning.", "We use PoS tags resulting from manual or automatic annotation.", "We evaluate using a widespread and easy to interpret measure based on overlap (purity), and an information-theoretical measure (Adjusted Mutual Information).", "Cluster purity measures how many points in a clustering (in our case, words) have been assigned to a cluster whose predominant label they share (e.g., adjectives clustered with other adjectives, nouns with other nouns, etc.).", "Intuitively, it measures the percentage of points properly classified (via their cluster membership).", "Formally, cluster purity is defined as $\mathrm{purity}(C_i) = \frac{1}{|C_i|} \max_{l=1}^{|L|} |\mathrm{label}(C_i, l)|$ (1) and $\mathrm{purity}(C) = \sum_{i=1}^{k} \frac{|C_i|}{|V|} \mathrm{purity}(C_i)$ (2) $= \frac{1}{|V|} \sum_{i=1}^{k} \max_{l=1}^{|L|} |\mathrm{label}(C_i, l)|$ (3), where the function $\mathrm{label}(C_i, l)$ provides the number of elements from $C_i$ with label $l$, and $L$ is the set of labels.", "Purity reaches a value of 1 when the clustering is identical to the ground-truth partitioning, or when each point is allocated to its own cluster (i.e., k = |V|).", "When k = 1, purity is equal to the fraction of points labeled with the most popular label, and thus provides a baseline.", "In our case, that is equal to the percentage of vocabulary allocated to the most popular PoS class, usually nouns.", "For values of k in (1, |V|), it varies depending on cluster quality.", "If k > |L|, purity can take a value of 1 if each cluster is a subset of a ground partition.", "Thus, purity is expected to increase as k grows beyond |L|.", "In our experiments, purity measures the percentage of vocabulary that is labeled correctly.", "In other words, purity does not depend on word frequency.", "Thus, it is not an approximation of PoS tagging accuracy, like the M-1 measure used by Bansal et al. (2014).",
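A small sketch (ours) of the purity computation in Eqs. (1)-(3), operating over word types rather than tokens, in line with the frequency-independence just noted; it assumes non-empty clusters:

```python
# Cluster purity: each cluster votes for its majority PoS label.
from collections import Counter

def purity(clusters, pos_of):
    """clusters: list of lists of word types; pos_of: word type -> PoS tag."""
    vocab_size = sum(len(c) for c in clusters)
    correct = sum(
        Counter(pos_of[w] for w in c).most_common(1)[0][1] for c in clusters
    )
    return correct / vocab_size

pos = {"cat": "NOUN", "dog": "NOUN", "run": "VERB", "walk": "VERB"}
print(purity([["cat", "dog", "run"], ["walk"]], pos))  # -> 0.75
```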
(2014).", "Since we focus on morphosyntactic information encoded in word clusters, we do not want a measure that takes into account word frequency in the given corpus (i.e, one that is a good approximation of PoS tagging per-formance), but one that focuses exclusively on the clusters and their content.", "cluster membership is randomly assigned, as it is easier for smaller clusters to randomly achieve label agreement.", "Adjusted Mutual Information (Ad-jMI) (Vinh et al., 2009), not to be confused with AMI (Brown and Exchange's optimization goal), measures the amount of information shared by the ground truth partitioning U and a clustering C .", "In our evaluation that corresponds to the amount of information shared by the PoS ground-truth partitioning and the clustering resulting from Brown or Exchange.", "AdjMI corrects for the mutual information expected to exist between the ground truth partitioning U and a random clustering.", "Formally, it is defined as: AdjMI ( U, C ) = MI ( U, C ) E { MI ( U, C ) } avg { H ( U ) , H ( C ) } E { MI ( U, C ) } (4) Where MI and H stand for Mutual Information and Entropy, respectively.", "Intuitively, AdjMI measures how much information we gain about a point's membership in to a cluster in the ground-truth partitioning U , when we know its membership to a cluster in an induced clustering C , and the other way around.", "As k increases over the number of ground partitions L , AdjMI has the opposite effect to purity , i.e., it scores lower due to the higher effect attributed to randomness.", "Just like purity , AdjMI takes values in the interval [0 , 1] .", "An AdjMI value of 1 corresponds to a clustering identical to the ground-truth partitioning, while a value of 0 corresponds to a clustering that is not better than a random allocation of points to clusters.", "Unlike purity , values of AdjMI cannot be interpreted to say anything about the percentage of points that have been properly allocated.", "In other words, a value of, say 0 .", "3 , does not indicate that 30% of the points have been properly separated.", "We use manually annotated data from Universal Dependencies (UD) (Leung et al., 2017) for English, French and Czech 1 .", "We chose the group of languages so that it represents different language families.", "Our choice of languages is based on the amount of manually labeled data, and the presence of each language in the larger, not annotated, EuroParl corpus.", "We append the manual or automatic PoS tags and convert text to lowercase.", "Therefore, a sentence such as Words have meaning. is transformed into words NOUN have VERB meaning NOUN . 
"We use manually annotated data from Universal Dependencies (UD) (Leung et al., 2017) for English, French and Czech.", "We chose the group of languages so that it represents different language families.", "Our choice of languages is based on the amount of manually labeled data, and on the presence of each language in the larger, unannotated EuroParl corpus.", "We append the manual or automatic PoS tags and convert the text to lowercase.", "Therefore, a sentence such as Words have meaning. is transformed into words_NOUN have_VERB meaning_NOUN ._PUNCT.", "Both word clustering algorithms studied in this paper are insensitive to the appended PoS tags, as they operate at the word and not the character level.", "The appended tags allow us to evaluate the quality of word clusters using the measures described in the previous section.", "We replace all numbers, dates, times, URLs and emails with placeholders in order to reduce vocabulary size.", "Universal Dependencies is the largest manually annotated corpus we have access to.", "For experiments on larger corpora, we use the unlabeled EuroParl corpus (Koehn, 2005), more specifically, the English-French and English-Czech pairs.", "Since manually annotated PoS tags are not available for EuroParl, we append automatically assigned PoS tags, obtained by using UDPipe (Straka and Strakova, 2017) pretrained on manually annotated corpora from Universal Dependencies.", "We use flat clusters from the Exchange clustering algorithm for all experiments reported in this section, as they outperform the flat clustering resulting from Brown in terms of Average Mutual Information (their optimization goal), Adjusted Mutual Information (AdjMI) and cluster purity.", "All observations in the following section also apply to the flat clusters resulting from Brown.", "For interested readers, we include all experiments with Brown Clustering as supplementary material.", "The fact that Exchange outperforms Brown Clustering in terms of AMI is well understood (Brown et al., 1992), but its effect on cluster content is not.", "Using Exchange, we induce flat clusterings with k in the range 18 to 800.", "We start with k = 18 as it matches the observed number of distinct PoS tags in the Universal Dependencies corpora (17 distinct tags and one catch-all tag).", "When setting the hyper-parameter k to be higher than 18, if Exchange separates clusters by Parts of Speech, then the expectation is that clusters are subsets of words sharing the same PoS tags, and that purity for such clusterings will be high.", "In Figure 1a, we show purity measured on the aforementioned clusters.", "We can see that, even when the number of clusters is equal to that of PoS tags (k = 18), between 55% and 62% of the vocabulary is properly separated.", "Purity increases as k increases towards 100.", "At k = 500 and k = 800, between 64% and 70% of the vocabulary is grouped based on PoS, not much more than at k = 100.", "Increasing the number of clusters k to high values is not guaranteed to improve purity, for any of the languages studied.", "This is contrary to the expectation that purity increases when k > 18.", "This indicates that Exchange and Brown do not exclusively optimize for Part of Speech separation.", "We believe the clustering algorithms might be striking a balance between encoding semantic and morphosyntactic information, since at higher values of k we usually see more clusters with a coherent semantic theme, such as names of geographic locations, names of men, names of women, or nouns denoting times, similar to the clusters observed in previous literature (Brown et al., 1992).", "For example, when using k = 18 in English, the token cat_NOUN appears in the same cluster as the plural version cats_NOUN.", "At k = 800, the clusters distinguish between the two tokens: cat_NOUN is placed together with a number of nouns in the singular, such as budget_NOUN, computer_NOUN, pet_NOUN, wheel_NOUN, while cats_NOUN is placed in a cluster of mostly pluralized nouns like children_NOUN, rooms_NOUN, dogs_NOUN, families_NOUN.",
"Adjusted Mutual Information (AdjMI) for the same clusterings (Figure 1b) shows a considerable decrease as the hyper-parameter k increases, especially at the high values of 500 and 800.", "This is in line with the expected punishment due to the effect attributed to randomness (see the term for the expected value of Mutual Information in Equation (4)).", "At values of k closer to the number of PoS tags in the data, AdjMI varies little from one clustering to the other.", "More interestingly, the relative order of separation performance between the languages studied is maintained going from purity to AdjMI, suggesting that no measure-specific effects are at play.", "By studying the frequency of incorrectly classified word types (i.e., of those whose PoS tag does not match the most popular one in their cluster), we find that most (about 85%) occur fewer than 5 times in the corpora.", "Such few observations likely do not provide enough information for Brown or Exchange to properly place those words.", "Therefore, from the already computed clusterings, we remove words with a frequency of less than 5 and recalculate the two quality measures.", "In Figures 1c and 1d, we can see that both purity and AdjMI improve considerably.", "Even in the most difficult case (k = 18), where the number of clusters matches that of distinct PoS tags, between 68% and 78% of words are properly placed, an increase of 21%-28% compared to the values in Figure 1a.", "For AdjMI, the scores more than double.", "These results show that even for small corpora, a large amount of morphosyntactic information can be encoded, completely unsupervised, using the Exchange clustering algorithm.", "(The same behavior can be observed for clusters derived using the Brown Clustering algorithm; see the supplementary material.)", "It also shows that, for low-frequency terms, there is not enough contextual information for a proper clustering.", "One disadvantage of thresholding by frequency is that, due to the Zipfian distribution of word frequencies in natural language, only a fraction of the original vocabulary remains after filtering out words with a frequency of less than 5: English (8,143 words; 24.01%), French (9,020 words; 17.45%), Czech (37,026 words; 22.51%).", "In order to benefit from more reliable word usage estimates, it is necessary to perform the same experiment on larger corpora.", "Unfortunately, bigger manually annotated data sets do not exist.", "We therefore turn to automatic PoS tagging.", "We use UDPipe (Straka and Strakova, 2017) with models pretrained on the Universal Dependencies corpora to automatically tag text from the EuroParl multi-language corpus containing transcriptions of European Parliament proceedings (Koehn, 2005).", "Automated annotations introduce labeling noise that should lead to a decrease in separation performance.", "Despite this, we expect to still be able to observe good PoS separation.", "After filtering the EuroParl corpora, the size of the remaining vocabulary is considerably larger: English (60,373 words; 37.80%), French (78,822 words; 38.60%), Czech (62,512 words; 35.19%).",
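The frequency thresholding above amounts to a simple vocabulary filter applied to the already computed clusterings before re-scoring them; a sketch (ours):

```python
# Drop word types occurring fewer than min_freq times before recomputing
# purity and AdjMI on the surviving vocabulary.
from collections import Counter

def filter_clusters(clusters, corpus_tokens, min_freq=5):
    freq = Counter(corpus_tokens)
    filtered = [
        [w for w in cluster if freq[w] >= min_freq] for cluster in clusters
    ]
    return [c for c in filtered if c]  # discard clusters emptied by the filter
```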
"In Figure 2a, we can see that there is a drop in performance that varies with the language but, when looking at purity, even in the worst performing clustering (French at k = 50), 60% of the vocabulary is still properly separated according to Parts of Speech.", "A drop in performance can also be observed for AdjMI in Figure 2b, with the value dropping for all languages, in some cases reducing by half.", "More interestingly, the relative performance order of the languages is changed.", "PoS separation for Czech outperforms that of the other languages.", "In fact, PoS separation for Czech on EuroParl data (Figure 2) scores higher than that of Czech on Universal Dependencies (Figures 1c and 1d).", "The source of this improvement requires more study for a proper attribution, but could be due to beneficial noise introduced by the automatic tagging, or to the introduction of more sentence structure by human translators.", "The fact that, even at low values of k, for all languages studied, on both corpora, Exchange word clusters (and also Brown word clusters; see the supplementary material) can successfully separate Parts of Speech helps explain why word clusters have had such success at PoS tagging, whether coupled with Markov Models (Derczynski et al., 2015), Markov Models and morphological features (Owoputi et al., 2013), or just by themselves via M-1 (Bansal et al., 2014).", "Neither Exchange nor Brown is guaranteed to converge to a global optimum.", "Both are greedy algorithms that optimize for high Average Mutual Information (AMI).", "As we have mentioned earlier, word clusters resulting from Exchange outperform those induced using the Brown clustering algorithm in terms of AMI (the algorithms' optimization goal), PoS purity and Adjusted Mutual Information (AdjMI).", "A natural question to ask is: can one improve the morphosyntactic content of word clusters by obtaining higher AMI, perhaps by developing new and better AMI-based clustering algorithms?", "We answer this question by studying the correlation between Average Mutual Information and the two cluster quality measures used earlier: purity and AdjMI.", "Brown clustering is a predictable, bottom-up, agglomerative, hard clustering algorithm that, for the same hyper-parameter k, generates the same clusters (assuming a stable and repeatable tie-breaking process, which is undefined in the literature) and therefore provides only one data sample.", "However, the Exchange algorithm is an iterative clustering algorithm that has a complete and valid cluster partitioning at the end of each iteration.", "Thus, we can also measure morphosyntactic content in each of these clusterings.", "In our experiments, we only obtain 10 different data samples from each run of the algorithm, not enough for a correlation analysis.", "In order to collect more data samples (i.e., more clusterings), we suggest using a stochastic version of Exchange where a percentage of all swaps is performed at random, rather than with the goal of improving AMI.", "This version of Exchange terminates based on the number of iterations, and generates valid word partitionings of varying quality (from an AMI perspective) at the end of each iteration.", "In this manner, it provides us with more data points (i.e., more different clusterings) for analysis.",
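A sketch (ours) of one iteration of the stochastic Exchange variant just described, where a fraction p of the swaps is made at random; `ami_gain` is a hypothetical scoring helper, not an implementation detail from the paper:

```python
# Stochastic Exchange: mostly greedy AMI-improving swaps, with a small
# probability p of moving a word to a random cluster instead.
import random

def stochastic_exchange_iteration(assignment, vocab, k, p=0.05):
    for w in vocab:
        if random.random() < p:
            assignment[w] = random.randrange(k)  # random swap
        else:
            # greedy swap: move w to the cluster with the highest AMI gain
            assignment[w] = max(
                range(k), key=lambda c: ami_gain(assignment, w, c)
            )
    return assignment  # a complete, valid partitioning after every iteration
```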
"Due to the small number of random swaps, at varying AMI, we obtain a sufficient number of distinct clusterings to perform a correlation study with sufficient data.", "With the stochastic implementation of Exchange, we run 50 iterations for all languages and k combinations studied earlier.", "In Tables 2 and 3, we show the Pearson and Spearman correlation coefficients between the AMI of all clusterings generated by Stochastic Exchange for a given run, and the two scores used earlier: purity and AdjMI.", "Due to space considerations, we only show results for k = 18 (i.e., the same number of clusters as the number of PoS tags).", "Correlation coefficients for other combinations are included in the supplementary material.", "p < 0.01 holds for all correlation experiments here and in the supplementary material, and for both correlation coefficients.", "The analysis presented below also holds for all the correlation results in the supplementary material.", "(Figure captions: (a) Cluster purity, with dotted lines marking the k = 1 baselines; (b) AdjMI; both plotted against the number of clusters for English, French and Czech.)", "For both purity and AdjMI, there is a strong Pearson correlation between higher AMI and better values of the evaluation score.", "This is independent of the language studied or the number of clusters derived.", "For Spearman, except for one case, in all combinations studied there is a high correlation, although to a slightly less extreme degree than with Pearson.", "Our experiments show that there is a strong correlation between AMI and performance in the separation of Parts of Speech, as measured by purity and AdjMI.", "The strong correlation provides grounding for research into new AMI-maximizing word clustering algorithms that can achieve higher AMI than Exchange or Brown, as such algorithms might be able to separate Parts of Speech even better.", "In the previous sections, we studied the ability of word clusters to encode morphosyntactic information.", "We clustered word types from unstructured text, where each token had its Part of Speech tag appended.", "The post-pended PoS tags are not used by either Brown or Exchange.", "They are essentially invisible to the algorithms, since both Brown and Exchange recognize words exclusively by internally assigned integer IDs and do not operate at the character level.", "However, post-pending PoS tags does introduce some information into the text by providing PoS-role disambiguation for each word occurrence.", "For example, without post-pended PoS tags, both the Exchange and Brown algorithms would conflate the two distinct grammatical roles of show in the sentence: Everyone must show their show tickets at the entrance.", "In this section, we study the PoS separation effects caused by such polysemy on Brown and Exchange word clusters.",
"Both Exchange and Brown construct hard clusters, i.e., each word can be assigned to exactly one word class.", "Thus, words with multiple roles, such as denominal verbs or deverbal nouns, cannot be differentiated by the algorithms when operating on corpora from languages where such morphological derivations are performed without employing suffixes or prefixes.", "In other words, if the lexical form does not change, neither Brown nor Exchange can identify which tokens represent which grammatical role.", "The extent of this effect is dependent on the language.", "In English, for example, nouns are often turned into verbs without changing the lexical form through morphological derivation, e.g., show as a verb vs. show as a noun.", "On the other hand, Czech is highly inflected, accounting for gender, case, number and person.", "This property of each language was not problematic in the experiments we have performed so far, due to the fact that post-pending the PoS tag from the ground truth (or from automatic tagging) effectively provides disambiguation of grammatical role.", "Measuring on the Universal Dependencies corpora, we find that the percentage of polyclass words (i.e., word types that are assigned more than one PoS class tag throughout the corpus) varies by language and increases (as a percentage of the remaining vocabulary) as we raise the minimum frequency threshold; see Table 4.", "For English and French, up to 43% of the vocabulary words have more than one tag, while only 5.5% of the Czech vocabulary shares the same property.", "Part of the reason why so many words have multiple PoS tags has to do with how the various language families derive new words, and part of the reason stems from errors in the PoS tagging of large text corpora (Silberztein, 2018).", "From a practical point of view, polysemous words create an upper bound on the effectiveness of hard clustering for Part of Speech (PoS) separation.", "In Figure 3, we show PoS purity for clusters induced over the Universal Dependencies (UD) corpora, where we consider all polyclass words as clustered incorrectly.", "We also show the minimum purity (when k = 1) as well as the upper bound given by the polysemy of each language as observed from the manual labels.", "The evaluation strictly penalizes multiclass polysemy and ignores errors in labeling, such as those identified by Silberztein (2018).", "For example, in the UD English corpus, only 3 occurrences of the word them are incorrectly tagged as adverb, while the remaining 750 are correctly labeled as pronoun.", "We defer to the data and consider the word to be impossible to allocate correctly to a cluster.", "We use such a strict evaluation as it provides a lower bound on what can be expected from Exchange and Brown clusters given the current data.", "Correcting PoS tags in the data would probably improve PoS separation; however, such corrections are outside the scope of the work in this paper.", "We should point out that this evaluation is not representative of the expected PoS tagging performance of word clusters on any given corpus, as for such taggers one would employ a different strategy, such as, for instance, always outputting the most popular PoS tag for any given word type.", "On top of that, our evaluation here does not take into account the frequency of tokens, which would be highly relevant for PoS tagging performance, but not for our evaluation.", "As expected, the most affected language is English, due to the high level of polysemy in the data.", "Here purity drops from 72.4 to 42.32 for k = 18, when compared with the results in Figure 1c.",
"This is followed by a 20-point drop for French, and by only a few points for Czech, the most morphologically rich of the three and the one with the least ambiguity in grammatical role.", "The results suggest that, even in the presence of language ambiguity, and under the strictest evaluation, Exchange and Brown clusters successfully encode a considerable amount of morphosyntactic information, which varies by language.", "These results, together with those presented earlier in this paper, provide empirical evidence for using word clusters as word representations in downstream NLP systems addressing tasks that rely on morphosyntactic knowledge of the targeted language (e.g., dependency parsing), or for use in new paradigms such as data programming (Ratner et al., 2016), where cluster membership can be a strong signal for probabilistic data labeling, even when considering language ambiguity.", "In this paper, we quantified the amount of morphosyntactic information encoded in Brown and Exchange word clusters, in a number of languages from different language families.", "Our empirical quantification helps explain the success of word clusters as word representations in NLP tasks that rely on morphosyntactic information, such as PoS tagging and Named Entity Recognition.", "It further provides empirical evidence for using word clusters as word representations in other NLP tasks that require morphosyntactic knowledge of the targeted language (e.g., dependency parsing), or for use in new paradigms such as data programming (Ratner et al., 2016), where cluster membership can be a strong signal for probabilistic data labeling.", "We have also shown that there is a strong correlation between AMI (Brown and Exchange's optimization goal) and performance in PoS separation.", "The strong correlation demonstrated provides grounding for research into new AMI-maximizing word representation algorithms that can achieve even better AMI optimization than Exchange or Brown." ]
[ "abstain", "abstain", "method", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "method", "result", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain" ]
[ "Word embeddings learned in two languages can be mapped to a common space to produce Bilingual Word Embeddings (BWE).", "Unsupervised BWE methods learn such a mapping without any parallel data.", "However, these methods are mainly evaluated on tasks of word translation or word similarity.", "We show that these methods fail to capture the sentiment information and do not perform well enough on cross-lingual sentiment analysis.", "In this work, we propose UBiSE (Un-supervised Bilingual Sentiment Embeddings), which learns sentiment-specific word representations for two languages in a common space without any cross-lingual supervision.", "Our method only requires a sentiment corpus in the source language and pretrained monolingual embeddings of both languages.", "We evaluate our method on three language pairs for cross-lingual sentiment analysis.", "Experimental results show that our method outperforms previous unsupervised BWE methods and even supervised BWE methods.", "Our method succeeds for a distant language pair English-Basque.", "Lack of annotated corpora degrades the quality of sentiment analysis in low-resource languages.", "Cross-lingual sentiment analysis tackles this problem by adapting the sentiment resource in a resource-rich language (the source language) to a resource-poor language (the target language).", "Bilingual Word Embeddings (BWE) provide a way to transfer the sentiment information from the source language to the target language.", "There has been an increasing interest in BWE methods in re-cent years, including both supervised methods and unsupervised methods.", "Supervised BWE methods map the word vectors of the two languages in a common space by exploiting either a bilingual seed dictionary or other parallel data, while unsupervised BWE methods do not utilize any form of bilingual supervision.", "Yet, these methods are mostly evaluated on tasks of word translation or word similarity, and do not perform well enough on cross-lingual sentiment analysis as shown in Section 4. Consider the case where we want to perform sentiment analysis on the target language with merely an annotated sentiment corpus in the source language.", "We assume pretrained monolingual embeddings of both languages are available to us.", "One solution is to first align the embeddings of both languages in a common space using unsupervised BWE methods, then train a classifier based on the source sentiment corpus.", "In this solution, no sentiment information is utilized to learn the alignment.", "In this paper, we propose to exploit the sentiment information and learn sentiment-specific alignment.", "The sentiment information is gradually incorporated into the BWE through an iterative constraint relaxation procedure.", "Unlike previous work which performed alignment in a single direction by linearly mapping the source vectors to the target vector space, we propose an alignment model that maps the vectors of the two languages to a new shared space with two non-linear transformations.", "Our model is able to separate positive vectors from negative vectors in the bilingual space and allow such sentiment information to be transferred to the target language.", "Our main contributions are as follows: 1. 
We propose a novel approach to learn bilingual sentiment-specific word embeddings without any cross-lingual supervision and perform cross-lingual sentiment analysis with minimal resource requirements.", "We propose an iterative constraint relaxation procedure that gradually incorporates the sentiment information into the BWE.", "Our proposed approach achieves state-of-the-art results.", "2. We introduce a novel sentiment-specific objective without having to explicitly build a classifier.", "Our approach is more explainable and better balances sentimental similarity and semantic similarity compared to previous approaches.", "3. We introduce an alignment-specific objective and a simple re-normalization trick.", "Unlike previous BWE methods that learn orthogonal mappings, we introduce non-orthogonal mappings which enable the transfer of sentiment information from the source language to the target language.", "Cross-Lingual Sentiment Analysis Existing approaches for cross-lingual sentiment analysis can be mainly divided into two categories:", "(i) approaches that rely on machine translation (MT) systems", "(ii) approaches that rely on cross-lingual word embeddings.", "Standard MT-based approaches perform cross-lingual sentiment analysis by translating the sentiment data into a selected language (e.g., English).", "More sophisticated algorithms including co-training (Wan, 2009; Demirtas and Pechenizkiy, 2013) and multi-view learning (Xiao and Guo, 2012) have been shown to improve performance.", "Zhou et al. (2015, 2016b,a) performed cross-lingual sentiment analysis by learning bilingual document representations.", "These methods translate each document into the other language and enforce a bilingual constraint between the original document and the translated version.", "Bilingual Word Embeddings Word embeddings trained separately on two languages can be aligned in a shared space to produce Bilingual Word Embeddings (BWE), which support many NLP tasks including machine translation (Lample et al., 2017), cross-lingual sentiment analysis (Barnes et al., 2018; Zhou et al., 2015) and cross-lingual dependency parsing (Guo et al., 2015).", "BWE can be obtained in a supervised way using a seed dictionary (Joulin et al., 2018; Artetxe et al., 2016), or in an unsupervised way without any bilingual data.", "Adversarial training was the first successful attempt to learn unsupervised BWE (Zhang et al., 2017; Conneau et al., 2017).", "Self-learning was proposed by Artetxe et al. (2017) to learn BWE with minimal bilingual resources, and was later extended into a fully unsupervised framework by adding an unsupervised dictionary initialization step (Artetxe et al., 2018).", "Multilingual Word Embeddings BWE methods can be extended to the case of multiple languages by simply mapping all the languages to the vector space of a selected language.", "However, directly learning multilingual word embeddings (MWE) in a shared space has been shown to improve performance (Ammar et al., 2016; Duong et al., 2017; Chen and Cardie, 2018; Alaux et al., 2018).", "Yet, all these approaches are mainly evaluated on word translation, and their effectiveness on cross-lingual sentiment analysis has not been empirically compared.", "Sentimental Embeddings Continuous word representations encode the syntactic context of a word but often ignore sentiment polarity information.", "This drawback makes it hard for them to distinguish words with similar syntactic context but opposite sentiment polarity (e.g., 
good and bad), resulting in unsatisfactory performance on sentiment analysis.", "Tang et al. (2014) learned word representations that encode both syntactic context and sentiment polarity by adding an objective to classify the polarity of an n-gram.", "This method can be generalized to the cross-lingual setting by training monolingual sentimental embeddings on both languages and then aligning them in a common space.", "However, it requires sentiment resources in the target language and is thus impractical for low-resource languages.", "There are also approaches to learn sentimental embeddings in the bilingual space without any sentiment resources in the target language.", "Barnes et al. (2018) jointly minimized an alignment objective based on a seed dictionary and a classification objective based on the sentiment corpus.", "Its performance is compared to our method in Section 4. Xu and Wan (2017) learned multilingual sentimental embeddings by extending the BiSkip model (Luong et al., 2015).", "However, their method does not apply to pretrained embeddings and requires large-scale parallel corpora, and is thus not included in our experiments.", "This subsection first introduces the proposed mappings for aligning the monolingual embeddings in the bilingual space, then describes the general self-learning algorithm used to learn these bilingual mappings.", "The details of our algorithm are explained in Sections 3.2 to 3.6.", "We assume we have normalized monolingual embeddings $S \in \mathbb{R}^{v \times d}$ and $T \in \mathbb{R}^{v \times d}$, where the $i$-th row of $S$ is the vector representation of word $i$ in the source language.", "The normalization procedure is as follows:", "(i) $\ell_2$-normalize each vector", "(ii) center the vectors", "(iii) $\ell_2$-normalize each vector again (Artetxe et al., 2018).", "Given these monolingual embeddings, existing BWE methods typically learn a projection matrix $W \in \mathbb{R}^{d \times d}$ from the source vector space to the target vector space.", "However, these methods are unsuitable in our setting for two reasons:", "(i) most methods constrain $W$ to be orthogonal or near-orthogonal, thus preserving distances between word vectors;", "(ii) word vectors in the target language space remain unchanged.", "These two properties prevent us from separating words with opposite sentiment polarity in the bilingual space.", "In this work, we propose to align the monolingual embeddings with two non-linear mappings: $f_s(x) = \frac{W_s x}{\|W_s x\|}$ and $f_t(x) = \frac{W_t x}{\|W_t x\|}$, where $\|\cdot\|$ denotes the $\ell_2$-norm, $W_s$ ($W_t$) is the projection matrix for the source (target) embeddings, and $x$ is a $d$-dimensional word vector.", "Each mapping can be seen as a linear projection followed by a re-normalization step.", "We propose the following convex domain $\mathcal{D} = \{W \in \mathbb{R}^{d \times d} \mid \|W\|_2 \le r\}$ as an alternative to the orthogonal constraint, where $\|\cdot\|_2$ denotes the spectral norm and $r$ is a hyperparameter that determines to what extent we want to preserve word distances.", "This is inspired by the unit spectral norm constraint proposed by Joulin et al. (2018).", "The alignment objective encourages word pairs in the dictionary to have similar representations in the bilingual space.", "In the unsupervised case, such a dictionary can be induced from the monolingual embeddings $S$ and $T$ (Artetxe et al., 2018).", "However, the quality of this dictionary is usually not good, which in turn degrades the quality of the projection matrices learned from this dictionary.", "Previous work (Artetxe et al., 2017, 2018) showed that an iterative 
self-learning procedure can induce a good bilingual dictionary and hence good projection matrices.", "Given an initial dictionary $D_{bi}$, this procedure iterates over two steps:", "(i) it aligns the monolingual embeddings in a common space based on $D_{bi}$, yielding $S'$ and $T'$;", "(ii) it computes a new dictionary $D_{bi}$ using nearest neighbour retrieval over the approximately aligned embeddings $S'$ and $T'$.", "In our method, there are three objects $W_s$, $W_t$ and $D_{bi}$ to update through the self-learning procedure.", "Thus we iterate over the following three steps: 1. Solve $W_s$ by minimizing a sentiment-specific objective $L_s$ over $\mathcal{D}$, as described in Section 3.3; 2. Solve $W_t$ by minimizing an alignment-specific objective $L_t$ over $\mathcal{D}$, as described in Section 3.4; 3. Derive a new bilingual dictionary $D_{bi}$ based on $S' = S W_s^\top$ and $T' = T W_t^\top$, as described in Section 3.5.", "The normalized embeddings $S$ and $T$ are not aligned along the first axis, i.e., the $i$-th row of $S$ does not correspond to the $i$-th row of $T$.", "Therefore, an initial bilingual dictionary is required in order to establish the correspondence between the two languages.", "Following Artetxe et al. (2018), we first compute the similarity matrices $M_s = S S^\top$ and $M_t = T T^\top$, sort them along the second axis, and normalize the rows, yielding $M'_s$ and $M'_t$.", "For each row in $M'_s$, we apply nearest neighbour retrieval over the rows of $M'_t$ to find its corresponding translation, yielding a dictionary $D_{s \to t} = \{(1, T_{s \to t}(1)), (2, T_{s \to t}(2)), \ldots, (v, T_{s \to t}(v))\}$, where $T_{s \to t}(i)$ is the translation of the source word $i$.", "The same procedure is repeated in the other direction, yielding $D_{t \to s}$.", "The two dictionaries are then concatenated to produce the initial bilingual dictionary $D_{bi} = D_{s \to t} \cup D_{t \to s}$.", "In order to incorporate the sentiment information into the bilingual word embeddings, we need a set of $d$-dimensional vectors with known sentiment polarity.", "We propose a neural network based approach to learn these sentiment-specific vectors.", "Let the training corpus in the source language be $C = \{(z_1, y_1), (z_2, y_2), \ldots, (z_{|C|}, y_{|C|})\}$, where $z_i$ is a text and $y_i$ is its corresponding label.", "A $d$-dimensional vector with sentiment polarity $y_i$ can be obtained by calculating the weighted average of the word vectors in $z_i$: $h_i = \frac{\sum_{j \in z_i} \exp(\alpha_j) S_j}{\sum_{j \in z_i} \exp(\alpha_j)}$ (1), where $S_j$ is the vector representation of the word $j$ in the source language (corresponding to the $j$-th row of $S$) and $\alpha_j$ is a scalar that scores the importance of word $j$ on the sentiment polarity.", "$\alpha_j$ is computed by $\alpha_j = \max(A S_j + b)$, where $A \in \mathbb{R}^{h \times d}$ and $b \in \mathbb{R}^h$ are the parameters to learn.", "This function can be seen as a convolution layer with $h$ filters followed by a max pooling layer.", "The number of filters $h$ is set to 4. Each $h_i$ is then forwarded to a linear classifier to predict the sentiment label $y_i$.", "Once we have trained the model by minimizing the cross-entropy loss, we re-compute $h_i$ for each training example $z_i$.", "We denote the set of vectors (i.e., $h_i$) with positive labels as $P = \{h^p_1, h^p_2, \ldots, h^p_{|P|}\}$ and the set of vectors with negative labels as $N = \{h^n_1, h^n_2, \ldots
, h^n_{|N|}\}$.", "In the 4-class setup, we have four sets: $P$, $N$, $SP$ (the set of strongly positive vectors), and $SN$ (the set of strongly negative vectors).", "Given a set of positive $d$-dimensional vectors $P = \{h^p_1, h^p_2, \ldots, h^p_{|P|}\}$ and a set of negative $d$-dimensional vectors $N = \{h^n_1, h^n_2, \ldots, h^n_{|N|}\}$ (or four sets in the 4-class setup), our goal is to distinguish the positive vectors from the negative vectors in the bilingual space, i.e., to separate $W_s h^p_i$ from $W_s h^n_j$ for any pair of $i, j$.", "We introduce a new $d$-dimensional vector $a^p \in \mathcal{O} = \{x \in \mathbb{R}^d \mid \|x\| \le 1\}$ to represent the positive direction, which is to be learned.", "In order to separate positive vectors from negative vectors in the bilingual space, we try to make $W_s h^p_i$ ($i = 1, \ldots, |P|$) close to $a^p$ and $W_s h^n_i$ ($i = 1, \ldots, |N|$) distant from $a^p$.", "For a given $a^p$, we first compute $a^{p\top} W_s h^p_i$ for $i = 1, 2, \ldots, |P|$ and denote the set of $i$ with the $\alpha|P|$ smallest values as $Q^p_+$, where $\alpha \in [0, 1]$ is a hyperparameter.", "These $W_s h^p_i$ are least similar to $a^p$ (dot product is used as the similarity metric), hence we maximize the average of $a^{p\top} W_s h^p_i$ over $Q^p_+$.", "Likewise, we denote the set of $i \in \{1, 2, \ldots, |N|\}$ with the $\alpha|N|$ largest values of $a^{p\top} W_s h^n_i$ as $Q^p_-$.", "These $W_s h^n_i$ are most similar to $a^p$, hence we minimize the average of $a^{p\top} W_s h^n_i$ over $Q^p_-$.", "The overall objective is as follows: $\min_{W_s \in \mathcal{D},\, a^p \in \mathcal{O}} L_s(W_s, a^p) = L'(W_s, a^p, P, N) = -\frac{1}{\alpha|P|} \sum_{i \in Q^p_+} a^{p\top} W_s h^p_i + \frac{1}{\alpha|N|} \sum_{i \in Q^p_-} a^{p\top} W_s h^n_i$ (2), where $\mathcal{D}$ is the convex set defined in Section 3.1.1.", "The rationale for this objective is that, instead of forcing every $W_s h^p_i$ to be close to $a^p$, we only focus on the fraction of positive vectors that are most distant from $a^p$, and vice versa for the negative vectors.", "We observe that this objective can be rewritten as: $\min_{W_s \in \mathcal{D},\, a^p \in \mathcal{O}} L_s(W_s, a^p) = \frac{1}{\alpha|P|} \max_{Q \in S_{|P|}(\alpha|P|)} \sum_{i \in Q} -a^{p\top} W_s h^p_i + \frac{1}{\alpha|N|} \max_{Q \in S_{|N|}(\alpha|N|)} \sum_{i \in Q} a^{p\top} W_s h^n_i$ (3), where $S_{|P|}(\alpha|P|)$ represents all subsets of $\{1, 2, \ldots
, |P|\}$ of size $\alpha|P|$, and $S_{|N|}(\alpha|N|)$ is similarly defined.", "1 For simplicity, we assume $\alpha|P|$ is already rounded to an integer.", "2 There is no need to introduce a new vector $a^n$ to represent the negative direction and introduce a new objective, since the new objective is exactly the same after replacing $a^p$ with $a^n$.", "This formulation shows that both terms of this objective can be seen as a maximum of linear functions of either $W_s$ or $a^p$.", "Therefore, our objective is convex with respect to either $W_s$ or $a^p$, and can thus be efficiently minimized using the projected gradient descent algorithm.", "We first minimize this objective with respect to $a^p$ over $\mathcal{O}$, then minimize it with respect to $W_s$ over $\mathcal{D}$.", "While this objective is useful in the binary setup, it does not separate a strongly positive vector in $SP$ from a weakly positive vector in $P$ (similarly for $SN$ and $N$).", "In order to achieve better performance in the 4-class setup, we adopt the one-versus-rest strategy to write $L_s$ as the sum of four terms: $\min_{W_s \in \mathcal{D},\, a^p, a^{sp}, a^n, a^{sn} \in \mathcal{O}} L_s(W_s, a^p, a^{sp}, a^n, a^{sn}) = L'(W_s, a^p, P, N \cup SP \cup SN) + L'(W_s, a^{sp}, SP, P \cup N \cup SN) + L'(W_s, a^n, N, P \cup SP \cup SN) + L'(W_s, a^{sn}, SN, P \cup SP \cup N)$ (4), where $L'$ is defined in", "Eq. (2) and $a^c$ is a $d$-dimensional vector representing the direction of class $c$.", "Based on the current bilingual dictionary $D_{bi}$, we construct two sets of vectors $\{x^s_1, x^s_2, \ldots, x^s_{2v}\}$ and $\{x^t_1, x^t_2, \ldots, x^t_{2v}\}$, where $x^s_i$ and $x^t_i$ are the vector representations of the $i$-th word pair in $D_{bi}$.", "With $W_s$ fixed, we can solve $W_t$ by minimizing: $\min_{W_t \in \mathcal{D}} L_t(W_t) = \sum_{i=1}^{2v} \|W_s x^s_i - W_t x^t_i\|^2$ (5), where $\mathcal{D}$ is the convex set defined in Section 3.1.1.", "This objective is convex with respect to $W_t$, and can thus be minimized efficiently using the projected gradient descent algorithm.", "Once we have computed $W_s$ and $W_t$, we can obtain the aligned embeddings $S' = S W_s^\top$ and $T' = T W_t^\top$.", "Then we induce a new dictionary $D_{bi}$ using nearest neighbour retrieval over the rows of $S'$ and $T'$.", "We perform the induction in two directions to produce $D_{s \to t}$ and $D_{t \to s}$, then concatenate them to produce $D_{bi}$.", "In this work, we propose a modified version of CSLS (Conneau et al., 2017) to be used as the similarity metric to perform nearest neighbour retrieval: $\mathrm{CSLS}'(x, y) = x^\top y - \frac{1}{k} \sum_{y' \in N_Y(x)} x^\top y' - \frac{1}{k} \sum_{x' \in N_X(y)} x'^\top y$ (6), where $N_Y(x)$ is the set of $k$ nearest neighbours of $x$ in the set of vectors $Y$ (in our case $Y$ is the set of rows of $T'$).", "We set $k$ to 10 following the original paper.", "As mentioned in Section 3.1.1, we introduce a hyperparameter $r$ to define the convex domain $\mathcal{D}$.", "There is a trade-off to make for $r$: a large $r$ better incorporates sentimental similarity but significantly harms the quality of the alignment, while a small $r$ constrains $W_s$ to be near-orthogonal and thus prevents it from capturing sentimental similarity.", "In order to address this problem, we propose to first set $r$ to 1, letting the monolingual embeddings be properly aligned.", "Then $r$ is iteratively increased by $\Delta r$, causing the positive vectors in the bilingual space to be gradually moved further away from the negative vectors.", "The training process stops when $r$ reaches a maximum value $r_{max}$, where $r_{max}$ is a 
hyperparameter.", "The pseudocode of UBiSE in the binary setup is shown in Algorithm 1. For the 4-class UBiSE, lines 3, 6, 7 are replaced by their counterparts in the 4-class setup.", "We use the multilingual sentiment dataset provided by Barnes et al. (2018).", "It contains annotated hotel reviews in English (EN), Spanish (ES), Catalan (CA) and Basque (EU).", "In our experiment, we use EN as the source language and ES, CA, EU as the target languages.", "For each target language, the dataset is divided into a target development set and a target test set.", "We also combine the strong and weak labels to produce a binary setup.", "3 Although having the number of iterations be implicitly defined by $r_{max}$ and $\Delta r$ makes choosing a small $r_{max}$ impractical, it allows us to tune $r_{max}$ in a single training process.", "Algorithm 1: 1: $r \leftarrow 1$; 2: Initialize $W_s$ and $W_t$ to identity matrices; 3: Learn $P$, $N$ from $S$ and $C$, according to Section 3.2.2; 4: Compute the initial bilingual dictionary $D_{bi}$ from $S$ and $T$, according to Section 3.2.1; 5: while $r \le r_{max}$ do; 6: $a^p \leftarrow \arg\min_{a^p \in \mathcal{O}} L_s(a^p, W_s)$; 7: $W_s \leftarrow \arg\min_{W_s \in \mathcal{D}} L_s(a^p, W_s)$; 8: $W_t \leftarrow \arg\min_{W_t \in \mathcal{D}} L_t(W_t)$; 9: $S' \leftarrow S W_s^\top$; 10: $T' \leftarrow T W_t^\top$; 11: Derive a new bilingual dictionary $D_{bi}$ from $S'$ and $T'$, according to Section 3.5; 12: $r \leftarrow r + \Delta r$; 13: end while; 14: Normalize the rows of $S'$, $T'$ to unit length.", "Pretrained fastText embeddings (Bojanowski et al., 2017) are used by all methods.", "The MUSE dataset (Conneau et al., 2017) is used by approaches that require bilingual supervision.", "Each dictionary contains 5000 unique source words.", "We empirically set $\Delta r = 0.01$", "and $v = 10000$.", "The vocabulary of each language is limited to the $v$ most frequent words so that the embedding matrix has shape $v \times d$.", "Hyperparameters $\alpha$ and $r_{max}$ are tuned on the target development set via a grid search.", "We apply stochastic dictionary induction by randomly setting the elements of the similarity matrix used for nearest neighbour retrieval to zero with probability $1 - p$, as described in Artetxe et al. (2018).", "$p$ is initialized to $0.1$", "and increased by $0.005$", "at each iteration.", "We empirically stop updating the dictionary when $r$ exceeds 3.", "We compare our method with the following baselines, including state-of-the-art BWE methods that are originally evaluated on the word translation task, as well as bilingual sentimental", "4 This dataset does not contain a dictionary for EN-EU, thus we translate the EN-ES dictionary into EN-EU.", "embedding methods that are optimized for cross-lingual sentiment analysis.", "The bilingual word embeddings learned by each method are later evaluated on cross-lingual sentiment analysis using the same classifier for fairness.", "ADVERSARIAL Conneau et al. (2017) proposed an unsupervised BWE method based on adversarial training.", "After a near-orthogonal projection matrix is learned through adversarial training, a refinement procedure is applied to improve the quality of the alignment.", "VECMAP Artetxe et al. (2018) proposed an unsupervised BWE learning framework.", "It consists of an unsupervised dictionary initialization step and the self-learning procedure mentioned in Section 3.1.2.", "PROCRUSTES Artetxe et al. (2016) proposed a simple and effective supervised BWE method that requires a seed dictionary.", "It computes the optimal projection matrix via singular value decomposition (SVD).", "RCSLS Joulin et al. 
(2018) proposed a supervised BWE method that also requires a seed dictionary.", "They proposed a training objective that is consistent with the retrieval criterion and that can be minimized using gradient descent.", "It achieves state-of-the-art results on the word translation task.", "BLSE Barnes et al. (2018) exploited both bilingual supervision and the sentiment corpus to learn bilingual sentimental embeddings.", "They jointly minimize an alignment-specific objective and a classification objective to learn the projection matrices.", "The trade-off between the two objectives is controlled by a hyperparameter in $[0, 1]$.", "We tune this hyperparameter on the target development set as described in the original paper.", "Once the projection matrices have been learned, the classifier in this model is abandoned.", "The quality of the resulting BWE is evaluated using the classifier mentioned in Section 4.4.", "We use DAN (Iyyer et al., 2015) as the classifier to perform cross-lingual sentiment analysis.", "The loss of each instance is weighted by its inverse class frequency to address the class imbalance problem.", "For each method, the dropout rate is fixed at 0.3 and the $\ell_2$-regularization strength is tuned on the target development set.", "We train five classifiers for each method and report the average macro-F1 on the target test set.", "Table 1 presents the results of different BWE methods.", "UBiSE outperforms all unsupervised methods on all six tasks and outperforms all baselines on four out of six tasks.", "All methods, especially unsupervised methods, suffer on distant language pairs, which is consistent with the observation of Søgaard et al. (2018).", "VECMAP and ADVERSARIAL perform significantly worse on EN-EU compared to supervised methods.", "Yet, UBiSE outperforms the strongest baseline by 2.1%", "on EN-EU, indicating that incorporating sentiment is vital for cross-lingual sentiment analysis on distant language pairs.", "Despite the similar performance across different BWE methods in the binary setup, UBiSE outperforms all baselines in the 4-class setup by a large margin (an average of +2.2%).", "This may indicate that the original monolingual embeddings are able to distinguish positive words from negative words (e.g., good and bad), but are poor at distinguishing strongly positive words from weakly positive words (e.g., good and perfect).", "The performance of BLSE is merely comparable to the other baselines.", "We suspect that this is due to the classifier we use to perform cross-lingual sentiment analysis.", "The original paper used SVM or logistic regression to perform classification, in which case BLSE achieved better performance due to the utilization of sentiment information.", "But if we use a deeper neural network to perform cross-lingual sentiment analysis, preserving the original semantic similarity is more important.", "A qualitative comparison between BLSE and UBiSE is presented in Section 4.8.", "We perform an ablation test to demonstrate the effect of the sentiment information provided by $L_s$.", "5 The optimal regularization strength depends on the BWE method.", "Stronger regularization is favourable to BLSE and UBiSE.", "6 We already obtain significantly better results after replacing the original classifier with DAN, compared with the originally reported results.", "We create a new model UBiSE-MIN that does not utilize the sentiment information by eliminating lines 6, 7, 12 in Algorithm 1. 
UBiSE-MIN runs 500 iterations for every language pair.", "The comparative results in Table 2 show that utilizing the sentiment information leads to an average improvement of +3.1%", "in the binary setup or +4.1%", "in the 4-class setup.", "Re-normalization is useful in the sense that it leads to better alignment by constraining all the bilingual vectors to be on the unit sphere.", "While this property does not matter for word translation as long as cosine similarity is used as the retrieval criterion, it matters for cross-lingual sentiment analysis.", "Another effect of re-normalization is that it introduces non-linearity between the linear projection and the classifier, which is vital for separating words with opposite sentiment polarity.", "Without non-linearity, the linear projection and the first layer of the classifier would collapse into a single linear projection, thus eliminating the effect of $W_s$.", "Figure 2 illustrates how this non-linearity helps separate positive words from negative words in the bilingual space.", "This effect is demonstrated in Section 4.8.", "To illustrate how UBiSE transfers sentiment information from the source language to the target language, we visualize six categories of words in the bilingual space of UBiSE and BLSE using t-SNE (Maaten and Hinton, 2008).", "As shown in Figure 1, both methods manage to separate positive words from negative words without any annotated data in Spanish.", "However, Barnes et al. (2018) abandon the original semantic similarity, which degrades performance, as shown in Section 4.5.", "In contrast, our method preserves semantic similarity by limiting the largest singular values of $W_s$ and $W_t$ to be smaller than $r_{max}$.", "The trade-off between semantic similarity and sentimental similarity is made by choosing an appropriate $r_{max}$.", "This paper presents a method to learn bilingual sentiment-specific word embeddings without any cross-lingual supervision.", "We propose a novel sentiment-specific objective that separates words", "with opposite sentiment polarity in the bilingual space, and an alignment objective that enables the transfer of sentiment information from the source language to the target language.", "An iterative constraint relaxation procedure is applied to gradually incorporate the sentiment information into the bilingual word embeddings.", "We empirically evaluate our method on three language pairs for cross-lingual sentiment analysis and demonstrate its effectiveness.", "Experimental results show that incorporating sentiment information significantly improves the performance on fine-grained cross-lingual sentiment analysis.", "We thank the anonymous reviewers for their helpful comments.", "Xiaojun Wan is the corresponding author." ]
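The constrained non-linear alignment described in Section 3.1.1 of the paper above is compact enough to prototype directly. Below is a minimal numpy sketch of the normalization pipeline, the mapping $f(x) = Wx / \|Wx\|$, and the projection onto the spectral-norm ball $\mathcal{D}$; the function names are our own, and using singular-value clipping for the projection is a standard choice we assume rather than a detail confirmed by the paper.

```python
import numpy as np

def normalize_embeddings(X):
    # Normalization from Section 3.1.1: (i) l2-normalize each row,
    # (ii) center the rows, (iii) l2-normalize each row again.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    X = X - X.mean(axis=0, keepdims=True)
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def apply_mapping(W, X):
    # f(x) = Wx / ||Wx||: a linear projection followed by re-normalization,
    # applied row-wise to an embedding matrix X of shape (v, d).
    Z = X @ W.T
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def project_to_spectral_ball(W, r):
    # Euclidean projection onto D = {W : ||W||_2 <= r}: clip singular values at r.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.minimum(s, r)) @ Vt
```

A projected gradient step on either projection matrix would simply alternate a gradient update with a call to project_to_spectral_ball.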
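The hardest-fraction objective $L'$ of Eq. (2) also admits a short subgradient implementation, since each selected subset makes the corresponding term a maximum of linear functions. The following is a sketch under our own assumptions about sign conventions and integer rounding, not the authors' code; P and N hold the sentiment vectors $h_i$ as rows.

```python
def sentiment_loss_and_grad(Ws, ap, P, N, alpha):
    # L'(Ws, ap, P, N) from Eq. (2): maximize average similarity to ap over the
    # alpha-fraction of hardest positives (least similar), minimize it over the
    # hardest negatives (most similar).
    sp = (P @ Ws.T) @ ap                 # a_p^T W_s h_i^p for every positive
    sn = (N @ Ws.T) @ ap
    kp = max(1, int(alpha * len(P)))
    kn = max(1, int(alpha * len(N)))
    Qp = np.argsort(sp)[:kp]             # Q_+^p: positives least similar to ap
    Qn = np.argsort(sn)[-kn:]            # Q_-^p: negatives most similar to ap
    loss = -sp[Qp].mean() + sn[Qn].mean()
    # Subgradient with respect to ap; Ws is updated analogously, followed by
    # the spectral-norm projection sketched above.
    grad_ap = Ws @ (N[Qn].mean(axis=0) - P[Qp].mean(axis=0))
    return loss, grad_ap

def project_to_unit_ball(a):
    # Projection onto O = {x : ||x|| <= 1}
    n = np.linalg.norm(a)
    return a if n <= 1.0 else a / n

# One projected (sub)gradient step on the direction vector:
# ap = project_to_unit_ball(ap - lr * grad_ap)
```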
[ "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "objective", "objective", "method", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "result", "objective", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain" ]
[ "NLP research is impeded by a lack of resources and awareness of the challenges presented by underrepresented languages and dialects.", "Focusing on the languages spoken in Indonesia, the second most linguistically diverse and the fourth most populous nation of the world, we provide an overview of the current state of NLP research for Indonesia's 700+ languages.", "We highlight challenges in Indonesian NLP and how these affect the performance of current NLP systems.", "Finally, we provide general recommendations to help develop NLP technology not only for languages of Indonesia but also other underrepresented languages.", "Research in natural language processing (NLP) has traditionally focused on developing models for English and a small set of other languages with large amounts of data (see Figure 1, bottom right).", "While the lack of data is generally cited as the key reason for the lack of progress in NLP for underrepresented languages (Hu et al., 2020; Joshi et al., 2020), we argue that another factor relates to the diversity and the lack of understanding of the linguistic characteristics of such languages.", "Through the lens of the languages spoken in Indonesia, the world's second-most linguistically diverse country, we seek to illustrate the challenges in applying NLP technology to such a diverse pool of languages.", "Indonesia is the 4th most populous nation globally, with 273 million people spread over 17,508 islands.", "There are more than 700 languages spoken in Indonesia, equal to 10% of the world's languages, second only to Papua New Guinea (Eber-hard et al., 2021).", "However, most of these languages are not well documented in the literature; many are not formally taught, and no established standard exists across speakers (Novitasari et al., These authors contributed equally.", "2020).", "Many of them are decreasing in use, as Indonesian ( Bahasa Indonesia ), the national language, is more frequently used as the primary language across the country.", "This process may ultimately result in a monolingual society (Cohn and Ravindranath, 2014).", "Among more than 700 Indonesian local languages, many are threatened.", "440 languages are listed as endangered and 12 as extinct according to data from Ethnologue (Eberhard et al., 2021) illustrated in Figure 2.", "Anindyatri and Mufidah (2020) found nearly half of a sample of 98 Indonesian local languages to be endangered while van Esch et al. 
(2022) observed 71 among 151 Indonesian local languages to have fewer than 100k speakers.", "Table 1 lists the names of the 10 most spoken local languages in Indonesia (Eberhard et al., 2021).", "Javanese and Sundanese are at the top with 84M and 34M speakers, respectively, while Madurese, Minangkabau, and Buginese each have around 6M speakers.", "Despite their large speaker populations, these local languages are poorly represented in the NLP literature.", "Compared to Indonesian, the number of research papers mentioning these languages has barely increased over the past 20 years (Figure 1, top).", "Furthermore, compared to their European counterparts, Indonesian languages are drastically understudied (Figure 1, bottom).", "This is true even for Indonesian, which has nearly 200M speakers.", "Language technology should be accessible to everyone in their native languages (European Language Resources Association, 2019), including Indonesians.", "In the context of Indonesia, language technology research offers some benefits.", "First, language technology is a potential peacemaking tool in a multi-ethnic country, helping Indonesians understand each other better and avoid the ethnic conflicts of the past (Bertrand, 2004).", "On a larger scale, language technology promotes language use (European Language Resources Association, 2019) and helps language preservation.", "Despite these benefits, following Bird (2020), we recommend a careful assessment of individual usage scenarios of language technology, so they are implemented for the good of the local population.", "For language technology to be useful in the Indonesian context, it additionally has to account for the dialects of local languages.", "Language dialects in Indonesia are influenced by the geographical location and regional culture of their speakers (Vander Klok, 2015) and thus often differ substantially in morphology and vocabulary, posing challenges for NLP systems.", "In this paper, we provide an overview of the current state of NLP for Indonesian and Indonesia's hundreds of languages.", "We then discuss the challenges presented by those languages and demonstrate how they affect state-of-the-art systems in NLP.", "We finally provide recommendations for developing better NLP technology not only for the languages in Indonesia but also for other underrepresented languages.", "Indonesia is one of the richest countries globally in terms of linguistic diversity.", "More than 400 of its languages belong to the Austronesian language family, while the others are Papuan languages spoken in the eastern part of the country.", "As shown in Figure 3, the Austronesian languages in Indonesia belong to three main groups: Western-Malayo-Polynesian (WMP), Central-Malayo-Polynesian (CMP), and South-Halmahera-West-New-Guinea (SHWNG) (Blust, 1980).", "WMP languages are Malay, Indonesian, Javanese, Sundanese, Balinese, and Minangkabau, among others.", "All languages mentioned in Table 1 are in this group.", "Figure 3: Map of Austronesian and Papuan languages in Indonesia.", "Languages belonging to CMP are languages of the Lesser Sunda Islands from East Sumbawa (with Bimanese) onwards to the east, and languages of the central and southern Moluccas (including the Aru Islands and the Sula Archipelago).", "The SHWNG group consists of languages of Halmahera and Cenderawasih Bay, and further-flung regions such as the Mamberamo River and the Raja Ampat Islands.", "Meanwhile, the Papuan languages are mainly spoken in Papua, such as Dani, Asmat, Maybrat, and 
Sentani.", "Some Papuan languages are also spoken in Halmahera, Timor, and the Alor Archipelago (Palmer, 2018; Ross, 2005).", "Most Austronesian linguists and archaeologists agree that the original 'homeland' of Austronesian languages must be sought in Taiwan and, prior to Taiwan, in coastal South China (Adelaar, 2005; Bellwood et al., 2011).", "In the second millennium BCE, the Austronesian people moved from Taiwan to the Philippines.", "From the Philippines, they moved southward to Borneo and Sulawesi.", "From Borneo, they migrated to Sumatra, the Malay Peninsula, Java, and even to Madagascar.", "From Sulawesi, they moved southward to the CMP area and eastward to the SHWNG area.", "From there, they migrated to Oceania and Polynesia, as far as New Zealand, Easter Island, and Hawaii (Gray and Jordan, 2000).", "The people that lived in insular Southeast Asia, such as in the Philippines and Indonesia, before the arrival of Austronesians were Australo-Melanesians (Bellwood, 1997).", "Gradual assimilation with Austronesians occurred, although some pre-Austronesian groups still survive, such as Melanesian people in eastern Indonesia (Ross, 2005; Coupe and Kratochvíl, 2020).", "Malay long served as the language (lingua franca) of interethnic communication in Southeast Asia and beyond (Steinhauer, 2005; Coupe and Kratochvíl, 2020).", "It functioned as the language of trade and the language of Islam because Muslim merchants from India and the Middle East were the first to introduce the religion into the harbor towns of Indonesia.", "After the arrival of Europeans, Malay was used by the Portuguese and Dutch to spread Catholicism and Protestantism.", "When the Dutch extended their rule over areas outside Java in the nineteenth century, the importance of Malay increased, and thus, the first standardization of the spelling and grammar occurred in 1901, based on Classical Malay (Abas, 1987; Sneddon, 2003).", "In 1928, the Second National Youth Congress participants proclaimed Malay (henceforth called Indonesian) as the unifying language of Indonesia.", "During World War II, the Japanese occupying forces forbade all use of Dutch in favor of Indonesian, which from then onward effectively became the new national language.", "From independence until the present, Indonesian has functioned as the primary language in education, mass media, and government activities.", "Many local language speakers are increasingly using Indonesian with their children because they believe it will help them attain a better education and career (Klamer, 2018).", "Recently, pretrained multilingual language models such as mBERT (Devlin et al., 2019), mBART (Liu et al., 2020), and mT5 (Xue et al., 2021b) have been proposed.", "Their coverage, however, focuses on high-resource languages.", "Only mBERT and mT5 include Indonesian local languages, i.e., Javanese, Sundanese, and Minangkabau, but with comparatively little pretraining data.", "Some multilingual datasets for question answering (TyDiQA; Clark et al., 2020), common sense reasoning (XCOPA; Ponti et al., 2020), abstractive summarization (Hasan et al., 2021), passage ranking (mMARCO; Bonifacio et al., 2021), cross-lingual visual question answering (xGQA; Pfeiffer et al., 2021), language and vision reasoning (MaRVL; Liu et al., 2021), paraphrasing (ParaCotta; Aji et al., 2021), dialogue systems (XPersona & BiToD; Lin et al., 2021a,b), lexical normalization (MultiLexNorm; van der Goot et al., 2021), and machine translation (FLORES-101; Guzmán et al., 2019) include Indonesian, but most others do not, and very few include 
Indonesian local languages.", "An exception is the weakly supervised named entity recognition dataset, WikiAnn (Pan et al., 2017), which covers several Indonesian local languages, namely Acehnese, Javanese, Minangkabau, and Sundanese.", "Parallel corpora including Indonesian local languages are:", "(i) CommonCrawl;", "(ii) Wikipedia parallel corpora like MediaWiki Translations and WikiMatrix (Schwenk et al., 2021);", "(iii) the Leipzig corpora (Goldhahn et al., 2012), which include Indonesian, Javanese, Sundanese, Minangkabau, Madurese, Acehnese, Buginese, Banjar, and Balinese; and", "(iv) JW-300 (Agić and Vulić, 2019), which includes dozens of Indonesian local languages, e.g., Batak language groups, Javanese, Dayak language groups, and several languages in Nusa Tenggara.", "Recent studies, however, have raised concerns regarding the quality of such multilingual corpora for underrepresented languages (Caswell et al., 2022).", "NLP research on Indonesian has occurred across multiple topics, such as POS tagging (Wicaksono and Purwarianti, 2010; Dinakaramani et al., 2014), NER (Budi et al., 2005; Rachman et al., 2017; Gunawan et al., 2018), sentiment analysis (Naradhipa and Purwarianti, 2011; Lunando and Purwarianti, 2013; Wicaksono et al., 2014), hate speech detection (Alfina et al., 2017; Sutejo and Lestari, 2018), topic classification (Winata and Khodra, 2015; Kusumaningrum et al., 2016), question answering (Mahendra et al., 2008; Fikri and Purwarianti, 2012), machine translation (Yulianti et al., 2011; Simon and Purwarianti, 2013; Hermanto et al., 2015), keyphrase extraction (Saputra et al., 2018; Trisna and Nurwidyantoro, 2020), morphological analysis (Pisceldo et al., 2008), and speech recognition (Lestari et al., 2006; Baskoro and Adriani, 2008; Zahra et al., 2009).", "However, many of these studies either did not release the data or used non-standardized resources with a lack of documentation and open source code, making them extremely difficult to reproduce.", "Recently, Wilie et al. (2020), Koto et al. (2020b, 2021), and Cahyawijaya et al. (2021) collected Indonesian NLP resources as benchmark data.", "Others have also begun to create standardized labeled data for Indonesian NLP, e.g., the works of Kurniawan and Aji (2018), Guntara et al. (2020), Koto et al. (2020a), Khairunnisa et al. (2020), and Mahendra et al. (2021). 1 https://mediawiki.org/wiki/Content_translation", "On the other hand, there has been very little work on local languages.", "Several works studied stemming (Sundanese (Suryani et al., 2018); Balinese (Subali and Fatichah, 2019)) and POS tagging (Madurese; Dewi et al., 2020).", "Koto and Koto (2020) built an Indonesian-Minangkabau parallel corpus and also sentiment analysis resources for Minangkabau.", "Other works developed machine translation systems between Indonesian and local languages, e.g., Sundanese (Suryani et al., 2015), Buginese (Apriani et al., 2016), Dayak Kanayatn (Hasbiansyah et al., 2016), and Sambas Malay (Ningtyas et al., 2018).", "Tanaya and Adriani (2016, 2018) studied Javanese character segmentation in non-Latin script.", "Safitri et al. (2016) worked on language identification of spoken data in Minangkabau, Sundanese, and Javanese, while Azizah et al. (2020) developed end-to-end neural text-to-speech models for Indonesian, Sundanese, and Javanese.", "Nasution et al. 
(2017, 2021) proposed an approach for bilingual lexicon induction and evaluated the approach on seven languages, i.e., Indonesian, Malay, Minangkabau, Palembang Malay, Banjar, Javanese, and Sundanese.", "Cahyawijaya et al. (2021) established a machine translation benchmark in Sundanese and Javanese using Bible data.", "Wibowo et al. (2021) studied a family of colloquial Indonesian that is influenced by some local languages via morphological transformation, and Putri et al. (2021) worked on abusive language and hate speech detection on Twitter for five local languages, namely Javanese, Sundanese, Madurese, Minangkabau, and Musi.", "Monolingual Data Unlabeled corpora are crucial for building large language models, such as GPT-2 (Radford et al., 2019) or BERT (Devlin et al., 2019).", "Available unlabeled corpora such as Indo4B (Wilie et al., 2020) and Indo4B-Plus (Cahyawijaya et al., 2021) mainly include data in Indonesian, with the latter containing 10% of data in Javanese and Sundanese.", "In comparison, in multilingual corpora such as CC100 (Conneau et al., 2020), Javanese and Sundanese data accounts for only 0.001% and 0.002% of the corpus size, respectively. Figure 4: Relationship between the number of speakers and the size of data in Wikipedia for languages spoken in Europe, Asia, and Indonesia.", "In mC4 (Xue et al., 2021b), there are only 0.6M Javanese and 0.3M Sundanese tokens out of a total of 6.3T tokens.", "In addition, we measure data availability in Wikipedia compared to the number of speakers in Figure 4.", "Much less data is available for the languages spoken in Indonesia, compared to European languages with similar numbers of speakers.", "For example, Wikipedia contains more than 3 GB of Italian articles but less than 50 MB of Javanese articles, despite both languages having a comparable number of speakers.", "Similarly, Sundanese has less than 25 MB of articles, whereas languages with comparable numbers of speakers have more than 1.5 GB of articles.", "Similar trends hold for most other Asian languages.", "Languages in Africa are even more underrepresented in terms of Wikipedia data (see Appendix B).", "Beyond the widely spoken local languages, most other Indonesian local languages do not have Wikipedia instances, in contrast to European languages with few speakers.", "It is very difficult to find alternative sources of high-quality text data for other local languages of Indonesia (such as news websites), as most such sources are written in Indonesian.", "Resources in long-tail languages are even more scarce due to a very low number of speakers.", "Moreover, most of the languages in the long tail are mainly used in a spoken context, making text data challenging to obtain.", "2 The number of speakers is collected from Wikidata (Vrandečić and Krötzsch, 2014), from the number of speakers (P1098) property as of Nov 7th, 2021, while the size is collected from the 20211101 Wikipedia dump.", "These statistics demonstrate that collecting unlabeled corpora for Indonesian local languages is extremely difficult.", "This makes it impractical to develop strong pretrained language models for these languages, which have been the foundation for many recent NLP systems.", "Labeled Data Most work on Indonesian NLP (see Section 2) has not publicly released the data or models, limiting reproducibility.", "Although recent Indonesian NLP benchmarks are 
addressing this issue, they mostly focus on the Indonesian language (see Appendix F).", "Some widely spoken local languages such as Javanese, Sundanese, or Minangkabau have extremely small labeled datasets compared to Indonesian, while others have barely any.", "The lack of such datasets makes NLP development for the local languages difficult.", "However, constructing new labeled datasets is still challenging due to: (1) the lack of speakers of some languages; (2) the vast continuum of dialectical variation (see Section 3.2.1); and (3) the absence of a writing standard in most local languages (see Section 3.3).", "The diversity of Indonesian languages is not only reflected in the large number of local languages but also in the large number of dialects of these languages (Section 3.2.1).", "Speakers of local languages also often mix languages in conversation, which makes colloquial Indonesian more diverse (Section 3.2.2).", "In addition, some local languages are more commonly used in conversational contexts, so they do not have consistent writing forms in written media (Section 3.3).", "Indonesian local languages often have multiple dialects, depending on the geographical location.", "Local languages of Indonesia spoken in different locations might differ (have some lexical variation) from one another, despite still being categorized as the same language (Fauzi and Puspitorini, 2018).", "For example, Anderbeck (2008) showed that villages across the Jambi province use different dialects of Jambi Malay.", "Similarly, Kartikasari et al. (2018) showed that Javanese between different cities in central and eastern Java could have more than 50% lexical variation, while Purwaningsih (2017) showed that Javanese in different districts in Lamongan has up to 13% lexical variation.", "Similar studies have been conducted on other languages, such as Balinese (Maharani and Candra, 2018) and Sasak (Sarwadi et al., 2019).", "Moreover, Indonesian and its local languages have multiple styles, even within the same dialect.", "One factor that affects style is the level of politeness and formality, similar to Japanese and other Asian languages (Bond and Baldwin, 2016).", "More polite language is used when speaking to a person with a higher social position, especially to elders, seniors, and sometimes strangers.", "Different politeness levels manifest in the use of different honorifics and even different lexical terms.", "To illustrate the distinctions between regional dialects and styles, we highlight common words and utterances across dialects and styles in Jambi Malay and Javanese in Tables 2 and 3, respectively.", "For Jambi Malay, we sample the result from prior work (Anderbeck, 2008).", "For Javanese, we ask native speakers to translate basic words into three regional dialects: Western, Central, and Eastern Javanese, and two different styles: Ngoko (standard, daily-use Javanese) and Krama (polite Javanese, used to communicate with elders and those with higher social status).", "However, since contemporary Krama Javanese is not very different among regions, we only consider Krama from the Eastern speakers' perspective.", "Jambi Malay has many dialects across villages.", "As shown in Table 2, many common words are spoken differently across dialects and styles.", "Similarly, Javanese is also different across regions.", "Not every Javanese speaker understands Krama, since its usage is very limited.", "Moreover, the number of Javanese speakers who can use Krama is declining. 
Table 4: Language identification accuracy based on different Javanese dialects and styles (columns: langid.py Top-1/Top-3, FastText Top-1/Top-3, CLD3 Top-1). Ngoko-Western: 0.241/0.621, 0.069/0.379, 0.759; Ngoko-Central: 0.345/0.690, 0.379/0.724, 0.828; Ngoko-Eastern: 0.276/0.552, 0.103/0.379, 0.552; Krama-Eastern: 0.345/0.759, 0.379/0.586, 0.897.", "Dialectical and style differences pose a challenge to NLP systems.", "To explore the extent of this challenge, we conduct an experiment to test the robustness of NLP systems to variations in Javanese dialects.", "We ask native speakers to translate 29 simple sentences into Javanese according to the specified dialect and style.", "We then evaluate several language identification systems on those instances.", "3 Krama is used to speak formally (e.g., with older or respected people).", "However, people prefer to use Indonesian more in a formal situation.", "People who move from suburban areas to bigger cities tend to continue to use Ngoko and thus also pass Ngoko on to their children.", "4 Our annotators are based in Banyumas, Jogjakarta, and Jember for Western, Central, and Eastern Javanese respectively.", "Using dialects from different cities might yield different results.", "Language identification is a core part of multilingual NLP and a necessary step for collecting textual data in a language.", "Despite its importance, it is an open research area, particularly for underrepresented languages (Hughes et al., 2006; Caswell et al., 2022).", "We compare langid.py (Lui and Baldwin, 2012), FastText (Joulin et al., 2017), and CLD3.", "The results can be seen in Table 4.", "In general, the language identification systems are more accurate in detecting Javanese texts in the Ngoko-Central dialect or Krama, since the systems were trained on Javanese Wikipedia data, which is written in either the Ngoko-Central or Krama dialects and styles.", "If an NLP system can only detect certain dialects, then this information should be conveyed explicitly.", "Problems arise if we assume that the model works equally well across dialects.", "For example, in the case of language identification, if we use the model to collect datasets automatically, then Javanese dialects on which the model performs poorly will be underrepresented in the data.", "Code-mixing is an occurrence where a person speaks alternately in two or more languages in a conversation (Sitaram et al., 2019; Winata et al., 2018, 2019b; Doğruöz et al., 2021).", "This phenomenon is common in Indonesian conversations (Barik et al., 2019; Johanes et al., 2020; Wibowo et al., 2021).", "In a conversational context, people sometimes mix their local languages with standard Indonesian, resulting in colloquial Indonesian (Siregar et al., 2014).", "This colloquial-style Indonesian is used daily in speech and conversation and is common on social media (Sutrisno and Ariesta, 2019).", "Some frequently used code-mixed words (especially on social media) are even intelligible to people that do not speak the original local languages.", "5 https://github.com/google/cld3 Table 6: Written form variations in several local languages, 
confirmed by native speakers (Language: meaning: written variation, IPA). Javanese (Eastern Ngoko): what: apa / opo, /ɔpɔ/; there is: ana / ono / onok, /ɔnɔʔ/; you: kon / koen, /kɔn/. Balinese (Alus Singgih): yes: inggih / nggih, /ʔŋgih/; I / me: tiang / tyang, /tiaŋ/; <greeting>: swastyastu / swastiastu, /swastiastu/. Sundanese (Badui Loma): please / sorry: punten / punteun, /puntən/; red: beureum / berem, /bərɨm/; salivating: ngacai / ngacay, /ŋacaɪ/.", "Interestingly, code-mixing can also occur in border areas where people are exposed to multiple languages, therefore mixing them together.", "For example, people in Jember (a regency district in East Java) combine Javanese and Madurese in their daily conversation (Haryono, 2012).", "Indonesian code-mixing not only occurs at the word level but also at the morpheme level (Winata, 2021).", "For example, quotenya (his/her quote, see Table 5) combines the English word quote and the Indonesian suffix -nya, which denotes possession; similarly, ngetag combines the Betawinese prefix nge- and the English word tag.", "More examples can be found in Table 5.", "3.3 Orthography Variation Many Indonesian local languages are mainly used in spoken settings and have no established standard orthography system.", "Some local languages do originally have their own archaic writing systems that derive from the Jawi alphabet or Kawi script, and even though standard transliterations into the Roman alphabet exist for some (e.g., Javanese and Sundanese), they are not widely known and practiced (Soeparno, 2015).", "Hence, some words have multiple romanized orthographies that are mutually intelligible, as they are pronounced the same.", "Some examples can be seen in Table 6.", "Such a variety of written forms is common in local languages in Indonesia.", "This variation leads to a significantly larger vocabulary size, especially for NLP systems that use word-based representations, and makes it challenging to constrain the representations of different spellings of the same word to be similar.", "Language evolves together with its speakers.", "A more widely used language may have a larger digital presence, which fosters a more written form of communication, while languages that are used only within small communities may emphasize the spoken form.", "Some languages are also declining, and speakers may prefer to use Indonesian rather than their local language.", "In contrast, there are isolated residents that use the local language daily and are less proficient in Indonesian (Nurjanah et al., 2018; Jahang and Meirina, 2021).", "These variations give rise to different requirements, and there is no single solution for all.", "Technology and education are not well-distributed within the nation.", "Internet penetration in Indonesia was 73.7% in 2020 but is mainly concentrated on Java.", "Among the non-Internet users, 39% explain that they do not understand the technology, while 15% state that they do not have a device to access the Internet.", "In some areas where the Internet is not seen as a basic need, imposing NLP technology on them may not necessarily be relevant.", "At the same time, general NLP development within the nation faces difficulties due to the lack of funding, especially in universities outside of Java.", "GPU servers are still scarce, even in top universities in the country.", "The dynamics of population movement in Indonesia also need to be taken into consideration.", "For example, urban communities transmigrate to remote areas for social purposes, such as teaching or becoming doctors for underdeveloped villages.", "Each of these situations might call for various new NLP technologies to be developed to facilitate better communication.", "Based on the challenges for Indonesian NLP highlighted in the previous section, we formulate proposals for improving the state of Indonesian NLP research, as well as that of other underrepresented languages.", "Our proposals cover several aspects including metadata documentation; potential research directions; and 
engagement with communities.", "In line with studies promoting proper data documentation for NLP research (Bender and Friedman, 2018; Rogers et al., 2021; Alyafeai et al., 2021; McMillan-Major et al., 2022), we recommend the following considerations.", "Regional Dialect Metadata We have shown that a local language can have large variation depending on the region and the dialect.", "Therefore, we suggest adding regional dialect metadata to NLP datasets and models, not only for Indonesian but also for other languages.", "This is particularly important for languages with large dialectical differences.", "It also helps to clearly communicate NLP capabilities to stakeholders and end-users, as it helps set expectations about which dialects the systems can handle.", "Additionally, regional metadata can indirectly inform topics present in the data, especially for crawled data sources.", "Style and Register Metadata Similarly, we also suggest adding style and register metadata.", "This metadata can capture the politeness level of the text, not only for Indonesian but also in other languages.", "In addition, this metadata can be used to document the formality level of the text, so it may be useful for research on modeling style or style transfer.", "Among the most spoken local languages, a lot of research has been done on mainstream NLP tasks such as hate-speech detection, sentiment analysis, entity recognition, and machine translation.", "Some research has even been deployed in production by industry.", "Many of the languages, however, are not widely spoken and remain under-explored.", "Focusing on these languages, we suggest future research directions as follows.", "Data-Efficient NLP Pretrained language models, which have taken the NLP world by storm, require an abundance of monolingual data.", "However, data collection has been a long-standing problem for low-resource languages.", "Therefore, we recommend more exploration into designing data-efficient approaches such as adaptation methods (Artetxe et al., 2020; Aji et al., 2020; Gururangan et al., 2020; Koto et al., 2021; Kurniawan et al., 2021), few-shot learning (Winata et al., 2021; Madotto et al., 2021; Le Scao and Rush, 2021), and learning from related languages (Khanuja et al., 2021; Khemchandani et al., 2021).", "The goal of these methods is effective resource utilization, that is, to minimize the financial costs for computation and data collection, as advocated by Schwartz et al. (2020), Cahyawijaya (2021), and Nityasya et al. 
(2021).", "Data Collection Data collection efforts need to be commenced as soon as possible, despite all the challenges (3.1).", "Here, we suggest collecting parallel data between Indonesian and each of the local languages for several reasons.", "First, a lot of Indonesians are bilingual (Koto and Koto, 2020), that is, they speak both Indonesian and their local language, which facilitates data collection.", "Moreover, the fact that the local languages have some vocabulary overlap with Indonesian (see Table 7 in the Appendix) might help facilitate building translation systems with relatively little parallel data (Nguyen and Chiang, 2017).", "Finally, having such parallel data, we can build translation systems for synthetic data generation.", "In line with this approach, the effectiveness of models trained on synthetic translated data can be explored.", "Compute-Efficient NLP The costly GPU requirement for current NLP models hinders adoption by local research institutions and industries.", "Instead of focusing on building yet another massive model, we suggest focusing on developing lightweight and fast neural architectures, for example through distillation (Kim et al., 2019; Sanh et al., 2019; Jiao et al., 2020), model factorization (Winata et al., 2019a) or model pruning (Voita et al., 2019).", "We also recommend research on more efficient training mechanisms (Aji and Heafield, 2017; Diskin et al., 2021).", "In addition, non-neural methods are still quite popular in Indonesia.", "Therefore, further research on the trade-off between the efficiency and quality of the models is also an interesting research direction.", "Robustness to Code-mixing and Non-Standard Orthography Languages in Indonesia are prone to variations due to code-mixing and non-standard orthography, which occurs on the morpheme or even grapheme level.", "Models that are applied to Indonesian code-mixed data need to be able to learn morphologically faithful representations.", "Therefore, we recommend more explorations on methods derived from subword tokenization (Gage, 1994; Kudo, 2018) and token-free models (Gillick et al., 2016; Tay et al., 2021; Xue et al., 2021a) to deal with this problem.", "This problem is also explored by Tan and Joty (2021) in an adversarial setting.", "NLP Beyond Text For many Indonesian local languages that are rarely if ever written, speech is a more natural communication format.", "We thus recommend more attention on less text-focused research, such as spoken language understanding (Chung et al., 2021; Serdyuk et al., 2018), speech recognition (Besacier et al., 2014; Winata et al., 2020a,b), and multimodality (Dai et al., 2020, 2021) in order to improve NLP for such languages.", "8 https://inacl.id/inacl/ 9 http://polyglotindonesia.org 10 https://merajutindonesia.id/ 11 https://www.mlindonesia.org/ 12 https://bigscience.huggingface.co/ 13 https://blog.iclr.cc/2021/08/10/ broadening-our-call-for-participation-to-iclr-2022/ 7234 References Husen Abas.", "As discussed in 3.4, it is difficult to generalize a solution across local languages.", "We thus encourage the NLP community, such as the Indonesian Association of Computational Linguistics (INACL) 8 to work more closely with native speakers and local communities.", "Local communities who work on linguistics such as Polyglot Indonesia , 9 Merajut Indonesia , 10 and Masyarakat Linguistik Indonesia 11 would be relevant collaborators to provide solutions and resources that support use cases benefiting the native speakers and communities of 
"We advise the involvement of linguists, for example, to aid the language documentation process (Anastasopoulos et al., 2020).", "We also support open-science movements such as BigScience12 or ICLR CoSubmitting Summer13 to help start collaborations and reduce the barrier to entry to NLP research.", "In this paper, we highlight challenges in Indonesian NLP.", "Indonesia is one of the most populous countries and the second-most linguistically diverse country in the world, with over 700 local languages, yet Indonesian NLP is underrepresented and under-explored.", "Based on the observed challenges, we also present recommendations to improve the situation, not only for Indonesian but also for other underrepresented languages.", "We are grateful to Alexander Gutkin and Ling-Yen Liao for valuable feedback on an earlier draft of this paper.", "We also thank the anonymous reviewers for their helpful feedback.", "Lastly, we acknowledge the support of Kata.ai in this work." ]
[ "abstain", "method", "result", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "result", "abstain", "result", "other", "other", "other" ]
[ "Neural Machine Translation (NMT) models are sensitive to small perturbations in the input.", "Robustness to such perturbations is typically measured using translation quality metrics such as BLEU on the noisy input.", "This paper proposes additional metrics which measure the relative degradation and changes in translation when small perturbations are added to the input.", "We focus on a class of models employing subword regularization to address robustness and perform extensive evaluations of these models using the robustness measures proposed.", "Results show that our proposed metrics reveal a clear trend of improved robustness to perturbations when subword regularization methods are used.", "Recent work has pointed out the challenges in building robust neural network models (Goodfel-low et al., 2015; Papernot et al., 2016).", "For Neural Machine Translation (NMT) in particular, it has been shown that NMT models are brittle to small perturbations in the input, both when these perturbations are synthetically created or generated to mimic real data noise (Belinkov and Bisk, 2018).", "Consider the example in Table 1 where an NMT model generates a worse translation as a consequence of only one character changing in the input.", "Improving robustness in NMT has received a lot of attention lately with data augmentation (Sper-ber et al., 2017; Belinkov and Bisk, 2018; Vaibhav et al., 2019; Liu et al., 2019; Karpukhin et al., 2019) and adversarial training methods (Cheng et al., 2018; Ebrahimi et al., 2018; Cheng et al., 2019; Michel et al., 2019) as some of the more popular approaches used to increase robustness in neural network models.", "In this paper, we focus on one class of methods, subword regularization, which addresses NMT Original input Se kylla tuntuu sangen luultavalta.", "robustness without introducing any changes to the architectures or to the training regime, solely through dynamic segmentation of input into subwords (Kudo, 2018; Provilkov et al., 2019).", "We provide a comprehensive comparison of these methods on several language pairs and under different noise conditions on robustness-focused metrics.", "Previous work has used translation quality measures such as BLEU on noisy input as an indicator of robustness.", "Absolute model performance on noisy input is important, and we believe this is an appropriate measure for noisy domain evaluation (Michel and Neubig, 2018; Berard et al., 2019; Li et al., 2019).", "However, it does not disentangle model quality from the relative degradation under added noise.", "For this reason, we propose two additional measures for robustness which quantify the changes in translation when perturbations are added to the input.", "The first one measures relative changes in translation quality while the second one focuses on consistency in translation output irrespective of reference translations.", "Unlike the use of BLEU scores alone, the metrics introduced show clearer trends across all languages tested: NMT models are more robust to perturbations when subword regularization is employed.", "We also show that for the models used, changes in output strongly correlate with decreased quality and the consistency measure alone can be used as a robustness proxy in the absence of reference data.", "Robustness is usually measured with respect to translation quality.", "Suppose an NMT model M translates input x to y (cid:48) and translates its perturbed version x to y (cid:48) , the translation quality (TQ) on these datasets is measured against reference 
"TQ can be implemented as any quality measurement metric, such as BLEU (Papineni et al., 2002) or $1 - \mathrm{TER}$ (Snover et al., 2006).", "Previous work has used TQ on perturbed or noisy input as an indicator of robustness.", "However, we argue that assessing a model's performance relative to that on the original dataset is important as well, in order to capture the model's sensitivity to perturbations.", "Consider the following hypothetical example: $M_1$: $\mathrm{BLEU}(y'_1, y) = 40$, $\mathrm{BLEU}(\tilde{y}'_1, y) = 38$; $M_2$: $\mathrm{BLEU}(y'_2, y) = 37$, $\mathrm{BLEU}(\tilde{y}'_2, y) = 37$.", "Selecting $M_1$ to translate noisy data alone is preferable, since $M_1$ outperforms $M_2$ ($38 > 37$).", "However, $M_1$'s quality degradation ($40 \rightarrow 38$) reflects that it is in fact more sensitive to perturbation compared with $M_2$.", "To this end, we use the ratio between $\mathrm{TQ}(\tilde{y}', y)$ and $\mathrm{TQ}(y', y)$ to quantify an NMT model $M$'s invariance to specific data and perturbation $\delta$, and define it as robustness: $\mathrm{ROBUST}(M \mid x, y, \delta) = \frac{\mathrm{TQ}(\tilde{y}', y)}{\mathrm{TQ}(y', y)}$.", "When evaluating on the dataset $(x, y)$, $\mathrm{ROBUST}(M \mid x, y, \delta) < 1$ means the translation quality of $M$ is degraded under perturbation $\delta$; $\mathrm{ROBUST}(M \mid x, y, \delta) = 1$ indicates that $M$ is robust to perturbation $\delta$.", "It is worth noting that: (1) ROBUST can be viewed as the normalized $\Delta\mathrm{TQ} = \mathrm{TQ}(y', y) - \mathrm{TQ}(\tilde{y}', y)$, because $\Delta\mathrm{TQ} / \mathrm{TQ}(y', y) = 1 - \mathrm{ROBUST}$.", "We opt for the ratio definition because it is on a $[0, 1]$ scale and is easier to interpret than $\Delta\mathrm{TQ}$, since the latter needs to be interpreted in the context of the TQ score.", "(2) High robustness can only be expected under low levels of noise, as it is not realistic for a model to recover from extreme perturbations.", "Evaluation without References: Reference translations are not readily available in some cases, such as when evaluating on a new domain.", "Inspired by unsupervised consistency training (Xie et al., 2019), we test if translation consistency can be used to estimate robustness against noise perturbations.", "Specifically, a model is consistent under a perturbation if the two translations, $y'$ and $\tilde{y}'$, are similar to each other.", "Note that consistency is sufficient but not necessary for robustness: a good translation can be expressed in diverse ways, which leads to high robustness but low consistency.", "$\mathrm{Sim}$ can be any symmetric measure of similarity, and in this paper we opt for $\mathrm{Sim}(y', \tilde{y}')$ to be the harmonic mean of $\mathrm{TQ}(y', \tilde{y}')$ and $\mathrm{TQ}(\tilde{y}', y')$, where TQ is BLEU between two outputs."
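To make the two proposed measures concrete, the following is a minimal sketch assuming sacreBLEU as the TQ implementation; function and variable names are ours, not from the paper's code.

```python
# Minimal sketch of the robustness and consistency metrics with sacreBLEU.
import sacrebleu

def tq(hyps, refs):
    """Translation quality: corpus BLEU between hypotheses and references."""
    return sacrebleu.corpus_bleu(hyps, [refs]).score

def robustness(clean_hyps, noisy_hyps, refs):
    """ROBUST(M | x, y, delta) = TQ(noisy outputs, y) / TQ(clean outputs, y)."""
    return tq(noisy_hyps, refs) / tq(clean_hyps, refs)

def consistency(clean_hyps, noisy_hyps):
    """Harmonic mean of BLEU(y', y~') and BLEU(y~', y'); needs no references."""
    a = tq(clean_hyps, noisy_hyps)
    b = tq(noisy_hyps, clean_hyps)
    return 2 * a * b / (a + b) if (a + b) > 0 else 0.0
```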
"We run several experiments across different language families with varying difficulties and across different training data conditions (i.e., with different training data sizes), and evaluate how different subword segmentation strategies perform across noisy domains and noise types.", "Implementation Details: We build NMT models with the Transformer-base architecture (Vaswani et al., 2017) implemented in the Sockeye toolkit (Hieber et al., 2017).", "The target embeddings and the output layer's weight matrix are tied (Press and Wolf, 2017).", "Training is done on 2 GPUs with a batch size of 3072 tokens, and we checkpoint the model every 4000 updates.", "The learning rate is initialized to 0.0002 and reduced by 10% after 4 checkpoints without improvement of perplexity on the development set.", "Training stops after 10 checkpoints without improvement.", "Tasks and Data: We train NMT models on eight translation directions and measure robustness and consistency for them.", "EN↔DE and EN↔FI models are trained with pre-processed WMT18 news data and tested with the latest news test sets (newstest2019).", "Recently, two datasets were built from user-generated content, MTNT (Michel and Neubig, 2018) and 4SQ (Berard et al., 2019).", "They provide naturally occurring noisy inputs and translations for EN↔FR and EN↔JA, thus enabling automatic evaluations.", "EN↔JA baseline models are trained and also tested with aggregated data provided by MTNT, i.e., KFTT+TED+JESC (KTJ).", "EN↔FR baseline models are trained with aggregated data from Europarl-v7 (Koehn, 2005), NewsCommentary-v14 (Bojar et al., 2018), OpenSubtitles-v2018 (Lison and Tiedemann, 2016), and ParaCrawl-v5,1 which simulates the UGC training corpus used in 4SQ benchmarks, and they are tested with the latest WMT news test sets supporting EN↔FR (newstest2014).", "Following convention, we also evaluate models directly on noisy MTNT (mtnt2019) and 4SQ test sets.", "We fine-tune baseline models with the corresponding MTNT/4SQ training data, inheriting all hyper-parameters except the checkpoint interval, which is re-set to 100 updates.", "Table 2 shows itemized training data statistics after pre-processing.", "Table 2 (statistics of various training data sets; #sentences / #EN tokens): BASE: EN↔DE 29.3M / 591M; EN↔FR 22.2M / 437M; EN↔FI 2.9M / 71M; EN↔JA 3.9M / 43M. MTNT: EN→FR 36.1K / 1,011K; FR→EN 19.2K / 779K; EN→JA 5.8K / 338K; JA→EN 6.5K / 156K. 4SQ: FR→EN 12.1K / 141K.", "Perturbations: We investigate two frequently used types of perturbations and apply them to the WMT and KTJ test data.", "The first is synthetic misspelling: each word is misspelled with probability 0.1, and the strategy is randomly chosen from single-character deletion, insertion, and substitution (Karpukhin et al., 2019).", "The second perturbation is letter case changing: each sentence is modified with probability 0.5, and the strategy is randomly chosen from upper-casing all letters, lower-casing all letters, and title-casing all words (Berard et al., 2019).2", "Since we change the letter case in the test data, we always report case-insensitive BLEU with '13a' tokenization using sacreBLEU (Post, 2018).", "Japanese output is pre-segmented with Kytea before running sacreBLEU.3", "Footnotes: 1 https://paracrawl.eu/; 2 Character substitution uses neighbor letters on the QWERTY keyboard, so accented characters are not substituted. Japanese is misspelled for each character with probability 0.1, and it only supports deletion and repetition. Letter case changing does not apply to Japanese; 3 http://www.phontron.com/kytea/"
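The following is a rough sketch of the two perturbation functions as described above; the QWERTY neighbor table is abbreviated, and details such as the handling of very short words are our assumptions.

```python
# Sketch of the synthetic misspelling (p=0.1 per word) and case-changing
# (p=0.5 per sentence) perturbations; QWERTY table abbreviated for brevity.
import random

QWERTY_NEIGHBORS = {"a": "qwsz", "e": "wsdr", "o": "iklp"}  # abbreviated

def misspell_word(word):
    strategy = random.choice(["delete", "insert", "substitute"])
    i = random.randrange(len(word))
    if strategy == "delete":
        return word[:i] + word[i + 1:]
    if strategy == "insert":
        return word[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + word[i:]
    neighbors = QWERTY_NEIGHBORS.get(word[i].lower())
    return word[:i] + random.choice(neighbors) + word[i + 1:] if neighbors else word

def perturb_misspelling(sentence, p=0.1):
    return " ".join(misspell_word(w) if w and random.random() < p else w
                    for w in sentence.split())

def perturb_case(sentence, p=0.5):
    if random.random() >= p:
        return sentence
    return random.choice([str.upper, str.lower, str.title])(sentence)
```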
"Model Variations: We focus on comparing different (stochastic) subword segmentation strategies: BPE (Sennrich et al., 2016), BPE-Dropout (Provilkov et al., 2019), and SentencePiece (Kudo, 2018).", "Subword regularization methods (i.e., BPE-Dropout and SentencePiece) generate various segmentations for the same word, so the resulting NMT model better learns the meaning of less frequent subwords and should be more robust to noise that yields unusual subword combinations, such as misspelling.", "We use them only in offline training data pre-processing steps, which requires no modification to the NMT model.4", "4 We sample one subword segmentation for each source sequence with SentencePiece.", "4 Experimental Results: As shown in Table 3, there is no clear winner among the three subword segmentation models based on BLEU scores on the original WMT or KTJ test sets.", "This observation is different from the results reported by Kudo (2018) and Provilkov et al. (2019).", "One major difference from previous work is the size of the training data, which is much larger in our experiments; subword regularization is presumably preferable in low-resource settings.", "However, both our proposed metrics (i.e., robustness and consistency) show clear trends in models' robustness to input perturbations across all languages we tested: BPE-Dropout > SentencePiece > BPE.", "This suggests that although we did not observe a significant impact of subword regularization on generic translation quality, the robustness of the models is indeed improved drastically.", "Unfortunately, it is unclear if subword regularization can help translate real-world noisy input, as shown in Table 4.", "MTNT and 4SQ contain several natural noise types, such as grammar errors and emojis, with misspelling as the dominating noise type for English and French.", "The training data we use may already cover common natural misspellings, perhaps contributing to the failure of regularization methods to improve over BPE in this case.", "Robustness Versus Consistency: Variation in output is not necessarily in itself a marker of reduced translation quality, but empirically, consistency and robustness nearly always provide the same model rankings in Table 3.", "We conduct a more comprehensive analysis of the correlation between them, and we collect additional data points by varying the noise level of both perturbations.", "Specifically, we use the following word misspelling probabilities: {0.05, 0.1, 0.15, 0.2} and the following sentence case-changing probability values: {0.3, 0.5, 0.7, 0.9}."
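As a small illustration of the correlation analysis reported below, the sample Pearson correlation between the two metrics can be computed with SciPy; the numbers here are made up for illustration only.

```python
# Sketch of the robustness-consistency correlation analysis; the score lists
# stand in for the (robustness, consistency) pairs collected across noise levels.
from scipy.stats import pearsonr

robustness_scores = [0.95, 0.90, 0.84, 0.78]   # illustrative values
consistency_scores = [0.93, 0.87, 0.80, 0.72]  # illustrative values

r, p_value = pearsonr(robustness_scores, consistency_scores)
print(f"Pearson's r = {r:.2f} (p = {p_value:.3f})")
```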
"As illustrated in Figure 1, consistency strongly correlates with robustness (sample Pearson's $r = 0.91$ to $0.98$) within each language pair.", "This suggests that for this class of models, low consistency signals a drop in translation quality, and the consistency score can be used as a robustness proxy when the reference translation is unavailable.", "Robustness Versus Noise Level: In this paper, robustness is defined by giving a fixed perturbation function and its noise level.", "We observe consistent model rankings across language pairs, but is this still true if we vary the noise level?", "To test this, we plot the robustness data points from the last section against the noise level.", "Focusing on the misspelling perturbation for EN→DE models, Figure 2 shows that varying the word misspelling probability does not change the ranking of the models, and the gap in the robustness measurement only increases with a larger amount of noise.", "[Figure 2: Varying the synthetic word misspelling probability for EN→DE models does not change the model ranking w.r.t. robustness (in percentage).]", "This observation applies to all perturbations and language pairs we investigated.", "We proposed two additional measures for NMT robustness which can be applied when both original and noisy inputs are available.", "These measure robustness as relative degradation in quality, as well as consistency, which quantifies variation in translation output irrespective of reference translations.", "We also tested two popular subword regularization techniques and their effect on overall performance and robustness.", "Our robustness metrics reveal a clear trend of subword regularization being much more robust to input perturbations than standard BPE.", "Furthermore, we identify a strong correlation between robustness and consistency in these models, indicating that consistency can be used to estimate robustness on data sets or domains lacking reference translations.", "We thank the anonymous reviewers for their comments and suggestions." ]
[ "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "method", "abstain", "objective", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "objective", "abstain", "abstain", "result", "abstain", "method", "other" ]
[ "Non-autoregressive encoder-decoder models greatly improve decoding speed over autoregressive models, at the expense of generation quality.", "To mitigate this, iterative decoding models repeatedly infill or refine the proposal of a non-autoregressive model.", "However, editing at the level of output sequences limits model flexibility.", "We instead propose iterative realignment , which by refin-ing latent alignments allows more flexible edits in fewer steps.", "Our model, Align-Refine, is an end-to-end Transformer which iteratively realigns connectionist temporal classification (CTC) alignments.", "On the WSJ dataset, Align-Refine matches an autoregressive baseline with a 14 decoding speedup; on LibriSpeech, we reach an LM-free test-other WER of 9.0% (19% relative improvement on comparable work) in three iterations.", "We release our code at https://github.com/ amazon-research/align-refine .", "Transformer encoder-decoder models (Vaswani et al., 2017) have achieved high performance in sequence-to-sequence tasks like neural machine translation (NMT; Edunov et al., 2018) and end-to-end automatic speech recognition (ASR; Karita et al., 2019).", "However, like their recurrent predecessors, these models are autoregressive at inference: tokens are generated sequentially, with each token conditioned on all previous tokens.", "This makes decoding time linear in output sequence length, which is slow for long sequences.", "By contrast, non-autoregressive (NAR) models decode all output tokens independently and in parallel.", "When combined with self-attention, one gets fast, constant-time inference in NMT (Gu et al., 2018) and end-to-end ASR (Salazar et al., 2019).", "However, these models underperform their autoregressive counterWork done during an internship at Amazon AWS AI.", "parts, as the conditional independence between output tokens results in globally inconsistent outputs.", "To mitigate these issues, infilling methods like Mask-Predict (Ghazvininejad et al., 2019) refine an initial non-autoregressive proposal, repeatedly predicting a masked subset of low-confidence proposal tokens in a fixed number of decoding passes.", "In ASR, most iterative non-autoregressive methods use infilling, like A-FMLM (Chen et al., 2020), Imputer (Chan et al., 2020), and Mask-CTC (Higuchi et al., 2020).", "However, during training, infilling requires partial proposals to be simulated by synthetically masking ground truths or samples from an expert.", "The resulting train-test mismatch leads to poor-quality generation.", "An alternative proposed in NMT is iterative refinement (Lee et al., 2018); here, full proposals are predicted and trained on at each iteration, with no masking required.", "This 1 Gu et al. 
"This reduces mismatch but still lacks flexibility: among other problems, working at the output sequence level constrains every iteration to the initial length $L$ predicted by the model, making it difficult to correct insertions or deletions.", "In this work, we propose iterative realignment, a variation on iterative refinement where latent alignments are edited instead.", "By working at the alignment level, we avoid length prediction and enable more powerful edits, while preserving the flexibility and reduced train-test mismatch of iterative refinement.", "Our model, Align-Refine, demonstrates this on ASR: after a Transformer encoder first produces a noisy non-autoregressive proposal, the refiner (a non-causal Transformer decoder) repeatedly conditions on both the previous proposal and the initial encoder representation to produce a better proposal.", "Both the encoder and refiner are supervised with CTC (Graves et al., 2006), a loss defined between sequences and their latent monotonic alignments.", "Unlike past methods, Align-Refine requires no token masking or expert policies.", "We validate our approach on two English ASR benchmarks, improving on state-of-the-art infilling methods (Mask-CTC and Imputer) in word error rate (WER) and/or inference time (as measured by real-time factor,2 or RTF).", "2 The decoding time divided by the length of the audio.", "On WSJ, we close the WER gap with an autoregressive baseline at 1/14th the RTF, outperforming Mask-CTC after a single iteration.", "On LibriSpeech, we improve on published (LM-free) NAR results by 2.1% WER absolute on test-other at less than 1/4th the effective layers (and thus estimated RTF) of Imputer.", "Our work suggests that iterative realignment is a promising direction for other sequence-to-sequence tasks, such as NMT.", "CTC (Graves et al., 2006) is a strategy for defining latent monotonic alignments from an input sequence $x$ to a shorter output sequence $y$.", "Let '_', termed a blank, be an additional possible output token; then a CTC alignment is reduced to an output sequence by collapsing repeated labels then removing blanks, e.g., AB__BB_A ↦ ABBA.", "Since this is a many-to-one process, to calculate $p(y \mid x)$ we marginalize over all alignments $\beta^{-1}(y)$ mapping to an output $y$.", "Assuming alignment labels are conditionally independent given $x$, this marginalization is tractable via dynamic programming.", "Iterative refinement methods non-autoregressively refine an initial proposal $y^0$ of length $L$, with full re-predicted proposals $y^k$ conditioned on previous proposals $y^{k-1}$ and the input sequence $x$: $p(y^k \mid x) = \prod_i p(y^k_i \mid x, y^{k-1})$.", "By contrast, infilling methods such as Mask-Predict (Ghazvininejad et al., 2019) mask part of the previous proposal every iteration.", "Only masked positions are re-predicted, conditioned on all unmasked tokens and the inputs $x$: $p(y^k_{\mathrm{mask}} \mid x) = \prod_i p(y^k_i \mid x, y^{k-1} \setminus y^{k-1}_{\mathrm{mask}})$.", "Typically, at each iteration a decreasing proportion of high-confidence tokens are unmasked,3 such that the full budget of $K$ iterations is always required.", "Furthermore, state-of-the-art methods for English ASR, Mask-CTC (Higuchi et al., 2020, Figure 2) and Imputer (Chan et al., 2020), enforce an added constraint that no decisions may be reversed at all: $p(y^k_i \mid x) = \mathbb{1}[y^k_i = y^{k-1}_i]$ when $y^{k-1}_i \neq \texttt{<MASK>}$."
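A minimal sketch of the CTC collapse function $\beta$ described above (collapse repeats, then remove blanks); the label alphabet here is illustrative.

```python
# Reduce a CTC alignment to an output sequence: "AB__BB_A" -> "ABBA".
from itertools import groupby

BLANK = "_"

def ctc_collapse(alignment):
    deduped = [label for label, _ in groupby(alignment)]   # collapse repeats
    return [label for label in deduped if label != BLANK]  # drop blanks

assert "".join(ctc_collapse("AB__BB_A")) == "ABBA"
```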
"We propose Align-Refine, a variation of iterative refinement which, like its predecessor, always keeps the proposal fully formed; this permits flexibility in decoding (as iteration can be stopped at any time) and potential speedups (errors are seen and fixed in parallel; easy utterances are refined in fewer steps).", "However, unlike Lee et al. (2018), our proposals are latent CTC alignments $a^k_{1:|x|}$, not outputs $y^k_{1:L}$.", "In previous work, working at the output sequence level is another source of irreversibility: the length $L$ must be predicted even before the initial proposal $p_{\mathrm{enc}}(y^0 \mid L, x)$ is generated, either explicitly (sampling from a modelled length distribution $p(L \mid x)$; Lee et al., 2018) or implicitly (collapsing CTC outputs before infilling; Higuchi et al., 2020).", "Either way, the decoder cannot fix insertions/deletions.", "[Figure 2: Align-Refine (char.), 5 iterations.]", "By contrast, since CTC alignments are by nature fixed-length, Align-Refine easily handles insertions/deletions by placing/replacing blanks and spaces (Figure 2).", "At a given iteration $k$, we have: $p(y \mid x) = \mathbb{E}_{a^{k-1}} \sum_{a^k \in \beta^{-1}(y)} p_{\mathrm{ref}}(a^k \mid a^{k-1}, x)$.", "This expectation over previous alignments is intractable, so like Lee et al. (2018) we take a deterministic lower bound: sampling the mode $\hat{a}^{k-1}$.", "After we marginalize over all $a^k$, the loss is $J_{\mathrm{CTC}}(\hat{a}^{k-1}, y; \theta_{\mathrm{ref}}, \theta_{\mathrm{enc}})$.", "Since we don't know a priori which $k$ is final, we apply the loss at $k = 1, \ldots, K$ with weights $w_1, \ldots, w_K$ for some hyperparameter $K$.", "For $k = 0$ we get $J_{\mathrm{CTC}}(x, y; \theta_{\mathrm{enc}})$.", "In summary, we take the greedy alignment at each iteration and apply the CTC loss, as shown in Figure 1 for $K = 2$.", "In practice, we upweight the encoder and first-iteration terms with weights $\alpha$ and $w_1$, then sum to give the total loss.", "For this and other training details, consult Appendices B and C."
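A simplified sketch of this multi-iteration training objective, assuming PyTorch and hypothetical `encoder`/`refiner` modules; this illustrates the described procedure under our own assumptions (e.g., no frame downsampling), not the authors' implementation.

```python
# Greedy alignment at each iteration, CTC loss applied at k = 0..K.
import torch

ctc = torch.nn.CTCLoss(blank=0, zero_infinity=True)

def align_refine_loss(encoder, refiner, feats, feat_lens, targets, target_lens,
                      K=4, alpha=1.0, w=(1.0, 0.5, 0.5, 0.5)):
    enc_out, log_probs = encoder(feats)              # log-probs: (T, N, C)
    loss = alpha * ctc(log_probs, targets, feat_lens, target_lens)
    alignment = log_probs.argmax(dim=-1)             # greedy alignment (mode)
    for k in range(1, K + 1):
        log_probs = refiner(alignment, enc_out)      # re-predict full alignment
        loss = loss + w[k - 1] * ctc(log_probs, targets, feat_lens, target_lens)
        alignment = log_probs.argmax(dim=-1).detach()  # pass mode to next step
    return loss
```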
"Data. We evaluate on two English ASR benchmarks: WSJ (81 hours; Paul and Baker, 1992) and LibriSpeech (960 hours; Panayotov et al., 2015).", "For WSJ, we run at the character level, matching Mask-CTC; for LibriSpeech, we build a 400-token BPE vocabulary, matching Imputer.", "We use 80-dim. filter banks and SpecAugment (Park et al., 2019).", "Model. For WSJ we use a 12-layer encoder and 6-layer decoder, as in Higuchi et al. (2020); each layer has 4 heads over 256 units.", "For LibriSpeech, we use 8 heads over 512 units.", "With CNN frontends, these are 27M and 71M parameters.", "Unless stated otherwise, we do $K = 4$ training iterations.", "Decoding. We evaluate with decoding iterations $k$ from $\{0, 1, 3, 5, 10\}$, exiting early on convergence (consecutive iterations are identical).", "The final CTC alignment is collapsed to give the result.", "To match previous non-autoregressive ASR work, we do not use a language model (LM).", "WSJ results (Table 1). Since Mask-CTC and Align-Refine share an identical architecture, the difference lies solely in training and evaluation.", "For both models, joint training with the refinement objective improves the encoder's performance as a standalone CTC model ($k = 0$) to a similar degree.", "However, from $k = 1$ onwards, Align-Refine outperforms Mask-CTC, improving the initial encoder proposal by 1.9% absolute in just one iteration.", "Table 1: Non-autoregressive ASR on WSJ (encoder passes / decoder passes; WER on dev93 / eval92; RTF). Autoregressive baseline (Higuchi et al., 2020): CTC+Attention 1/L, 14.4/11.3, 0.97*; + beam search 1/>L, 13.5/10.9, 4.62*. Previous work (Higuchi et al., 2020; Chan et al., 2020): CTC 1/-, 22.2/17.9, 0.03*; Mask-CTC 1/0, 16.3/12.9, 0.03*; 1/1, 15.7/12.5, 0.04*; 1/5, 15.5/12.2, 0.05*; 1/#mask, 15.4/12.1, 0.13*; Imputer (8 layers), 8 passes, eval92 12.7. Our work: CTC 1/-, 18.6/15.0, 0.036; Mask-CTC 1/5, 15.3/12.8, 0.072; Align-Refine 1/0, 16.2/13.5, 0.037; 1/1, 14.1/11.6, 0.048; 1/3, 13.9/11.5, 0.066; 1/5, 13.7/11.4, 0.068; 1/10, 13.7/11.4, 0.068.", "By $k = 5$, Align-Refine closes the performance gap with a comparable autoregressive model at 1/14th the RTF.", "[Figure 3: Align-Refine (subword), up to 5 iterations, on a LibriSpeech test-other utterance. The encoder proposal [WHEN___[DI[DI_CKI__[CAME...[HIM is progressively realigned (e.g., [DI_CKI__ ↦ [DI_CKIEE at k=1, then ↦ [DI_CKI_E at k=2); k=3 is identical to k=2, so we collapse early to WHEN DICKIE CAME DOWN HIS AUNT SLIGHTLY SLAPPED HIM. The reference matches the prediction.]"
"As for the 8-layer Imputer, it seems unlikely that re-introducing SpecAugment would outperform this augmented 12+6-layer autoregressive baseline; even if performances did match, RTFs would be higher than Align-Refine's (Section 5).", "LibriSpeech results (Table 2). Align-Refine gives 9.0% WER on test-other with no LM, outperforming published non-autoregressive models by 2.1% WER absolute.", "This is 5.1 points better than training the encoder with CTC only, even after SpecAugment is used (compare with 1.9 points from 16-layer CTC to Imputer).", "In Figure 3 we see the refiner make output-conditional edits that would be difficult for greedy CTC inference; we attribute our outsized gain on test-other to this LM-like behavior.", "Future work could start from stronger CTC encoders like QuartzNet (Kriman et al., 2020) to achieve even better results.", "In all, we have seen that Align-Refine improves performance over infilling methods like Imputer and Mask-CTC across tokenizations and dataset sizes (Tables 1 and 2).", "We saw improvements qualitatively as parallel and multi-stage insertions/deletions.", "[Figure 4: Proportion of utterances in each LibriSpeech test set (clean, other) whose alignments become fixed at iteration k.]", "Number of iterations. In Figure 4 we graph how many utterances became fixed at each iteration $k$ (i.e., where $a^{k+1} = a^k \neq a^{k-1}$).", "First, we see that the CTC alignment is always revised by the refiner, evidence that CTC's non-autoregressive greedy mode is fundamentally different from the conditionally dependent mode annealed to by the refiner.", "We see that upweighting $w_1$ and training with $K = 4$ largely confined the fixed-point iteration index $k$ to 4 or less.", "A few utterances never reach a fixed point (labeled $\infty$ in Figure 4); rather, they cycle through two or more alignments repeatedly.", "In these cases, the transcript is finalized except on a local set of tokens where the model flips back and forth, e.g., -OUS versus -US.", "One could mitigate this by stopping once an edit distance of 1 between alignments is reached (similar to Lee et al., 2018), or by comparing with proposals, e.g., from two iterations prior.", "Length independence. In Figure 5, we empirically validate the length independence of our method by plotting the fixed-point iteration $k$ versus the alignment length.", "The medians increase from $k = 1$ to $k = 2$ by lengths > 300, but no further (for the lengths in LibriSpeech).", "One observation is that as an utterance gets longer, the chance that multimodal corrections like [DI_CKI__ ↦ [DI_CKIEE (Figure 3) are made increases, which a further iteration then resolves ([DI_CKIEE ↦ [DI_CKI_E), independent of whether it affects the collapsed transcript.", "Interestingly, a few short alignments (e.g., lengths < 100) use from 3 all the way up to 8 iterations, perhaps due to a lack of linguistic context the refiner can use for disambiguation.", "Speed and decoder depth. By factoring out refinement from feature processing, we can effectively adapt to variable inference budgets by adjusting the number of decoding iterations.", "While Imputer requires a separate run of the entire model for every iteration, for Align-Refine only the decoder must be rerun.", "Another advantage is that since we have a full proposal at every iteration, in the vast majority of cases we exited early once the proposals stabilized."
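A sketch of decoding with early exit on convergence, under the same hypothetical `encoder`/`refiner` modules as the training sketch above; blank id 0 is an assumption.

```python
# Iterative realignment at inference: stop once two consecutive alignments match.
from itertools import groupby
import torch

@torch.no_grad()
def decode(encoder, refiner, feats, blank_id=0, max_iters=10):
    enc_out, log_probs = encoder(feats)
    alignment = log_probs.argmax(dim=-1)            # initial CTC proposal
    for _ in range(max_iters):
        new_alignment = refiner(alignment, enc_out).argmax(dim=-1)
        if torch.equal(new_alignment, alignment):   # converged: exit early
            break
        alignment = new_alignment
    ids = [i for i, _ in groupby(alignment.squeeze(0).tolist())]
    return [i for i in ids if i != blank_id]        # collapse to final output
```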
"Kasai et al. (2021) critiqued this factorization in non-autoregressive models by showing autoregressive models could shift layers from decoder to encoder to reduce the speed gap; however, we show that Align-Refine benefits from a similar reallocation (Table 3).", "Table 3: Results with various encoder-decoder splits on LibriSpeech (WER on test-other at k = 0/1/2/3/5). Align-Refine-12-6: 12.5/10.0/9.4/9.3/9.3; Align-Refine-15-3: 10.5/9.5/9.4/9.3/9.3; Align-Refine-17-1: 10.4/10.0/10.0/10.0/10.0.", "The refiner needs some depth for best performance: 17-1 underperforms on test-other (though within 0.1 points on test-clean).", "However, 15-3 performs as well as 12-6 from $k = 1$ onwards, despite passing through half the number of decoder layers.", "At $k = 3$, our LibriSpeech model's RTF was 0.171, while 15-3's RTF is 0.136.", "In this configuration, we pass through 15 encoder and at most $3 \times 3 = 9$ decoder layers.", "By contrast, Imputer inference passes through $8 \times 16 = 128$ encoder layers with the same alignment-length inputs and layer size.", "Limitations. The refiner sometimes makes edits which do not affect post-collapse outputs (Figure 3, $k = 2$); variants that use repetition tokens (ASG; Collobert et al., 2016) or prohibit repetition collapse (Chan et al., 2020) may mitigate this behavior.", "The initial downsampling also restricts what edits can be done in one step (Figure 2, $k = 1, 2$).", "We found that Align-Refine performed worse when there were more alignment labels per word (Appendix A).", "In general, word-level errors like SAI FOOD HIS (Figure 2) may require more coordinated mechanisms to fix.", "Future work could integrate non-autoregressive LMs like BERT (Devlin et al., 2019) via alignment synthesis, fusion, or otherwise (e.g., Salazar et al., 2020).", "Some concurrent works (Inaguma et al., 2021; Tian et al., 2021) also propose reranking NAR candidates with a jointly-trained autoregressive decoder.", "We are grateful to the AWS Speech Science team and the Stanford NLP Group for their advice and feedback, to Yosuke Higuchi and William Chan for their helpful correspondence, and to the anonymous reviewers for their thoughtful feedback." ]
[ "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "abstain", "abstain", "result", "result", "result", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "method", "other", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "We focus on the study of conversational recommendation in the context of multi-type dialogs, where the bots can proactively and naturally lead a conversation from a nonrecommendation dialog (e.g., QA) to a recommendation dialog, taking into account user's interests and feedback.", "To facilitate the study of this task, we create a human-to-human Chinese dialog dataset DuRecDial (about 10k dialogs, 156k utterances), which contains multiple sequential dialogs for every pair of a recommendation seeker (user) and a recommender (bot).", "In each dialog, the recommender proactively leads a multi-type dialog to approach recommendation targets and then makes multiple recommendations with rich interaction behavior.", "This dataset allows us to systematically investigate different parts of the overall problem, e.g., how to naturally lead a dialog, how to interact with users for recommendation.", "Finally we establish baseline results on DuRecDial for future studies.", "1 1 Introduction In recent years, there has been a significant in-crease in the work of conversational recommendation due to the rise of voice-based bots (Chris-takopoulou et al., 2016; Li et al., 2018; Reschke et al., 2013; Warnestal, 2005).", "They focus on how to provide high-quality recommendations through dialog-based interactions with users.", "These work fall into two categories: (1) task-oriented dialog-modeling approaches (Christakopoulou et al., 2016; Sun and Zhang, 2018; Warnestal, 2005); (2) non-task dialog-modeling approaches with more freeform interactions (Kang et al., 2019; Li et al., 2018).", "Corresponding author: Wanxiang Che.", "1 Dataset and codes are publicly available at https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2020-DuRecDial.", "Almost all these work focus on a single type of dialogs, either task oriented dialogs for recommendation, or recommendation oriented open-domain conversation.", "Moreover, they assume that both sides in the dialog (especially the user) are aware of the conversational goal from the beginning.", "In many real-world applications, there are multiple dialog types in human-bot conversations (called multi-type dialogs ), such as chit-chat, task oriented dialogs, recommendation dialogs, and even question answering (Ram et al., 2018; Wang et al., 2014; Zhou et al., 2018b).", "Therefore it is crucial to study how to proactively and naturally make conversational recommendation by the bots in the context of multi-type human-bot communication.", "For example, the bots could proactively make recommendations after question answering or a task dialog to improve user experience, or it could lead a dialog from chitchat to approach a given product as commercial advertisement.", "However, to our knowledge, there is less previous work on this problem.", "To address this challenge, we present a novel task, conversational recommendation over multi-type dialogs, where we want the bot to proactively and naturally lead a conversation from a non-recommendation dialog to a recommendation dialog.", "For example, in Figure 1, given a starting dialog such as question answering, the bot can take into account user's interests to determine a recommendation target (the movie < The message > ) as a long-term goal, and then drives the conversation in a natural way by following short-term goals, and completes each goal in the end.", "Here each goal specifies a dialog type and a dialog topic.", "Our task setting is different from previous work (Chris-takopoulou et al., 2016; Li et al., 
2018).", "First, the overall dialog in our task contains multiple dialog types, instead of a single dialog type as done in previous work.", "Second, we emphasize the initiative of the recommender, i.e. the bot proactively plans Name: (Fanyu Yang) Gender: (Male) Age: 20 Domains that the user likes: movie, music Stars that the user likes: (Xun Zhou) (Rene Liu) Recommendation accepted: < >(Stolen Life) Recommendation rejected: < >(The little prince) User profile Conversational recommendation Knowledge graph < > (Stolen life) (movie star) < >(The message) (Xun Zhou) (the best actress of the Asian Film Awards) (the most popular actress of the Golden Eagle Award of China TV) , (It has refined characters and capricious plots.) (Rene Liu) < 1937> (Don't cry, Nanking!) actor c o mm e n t actor (It shows the director's thinking on war, human nature.) c o mm e n t comment type type (Historical war film) Goal planning: QA about <Stolen life>, chitchat about Xun Zhou, recommending the movie <The message>, recommending <Don't cry,", "a goal sequence to lead the dialog, and the goals are unknown to the users.", "When we address this task, we will encounter two difficulties: (1) how to proactively and naturally lead a conversation to approach the recommendation target, (2) how to iterate upon initial recommendation with the user.", "To facilitate the study of this task, we create a human-to-human rec ommendation oriented multi-type Chinese dial og dataset at Bai du ( DuRecDial ).", "In DuRecDial , every dialog contains multi-type dialogs with natural topic transitions, which corresponds to the first difficulty.", "Moreover, there are rich interaction variability for recommendation, corresponding to the second difficulty.", "In addition, each seeker has an explicit profile for the modeling of personalized recommendation, and multiple dialogs with the recommender to mimic real-world application scenarios.", "To address this task, inspired by the work of Xu et al. (2020), we present a m ultig oal driven c onversation g eneration framework ( MGCG ) to handle multi-type dialogs simultaneously, such as QA/chitchat/recommendation/task etc..", "It consists of a goal planning module and a goal-guided responding module.", "The goal-planning module can conduct dialog management to control the dialog flow, which determines a recommendation target as the final goal with consideration of user's interests and online feedback, and plans appropriate short-term goals for natural topic transitions.", "To our knowledge, this goal-driven dialog policy mechanism for multi-type dialog modeling is not studied in previous work.", "The responding module produces responses for completion of each goal, e.g., chatting about a topic or making a recommendation to the user.", "We conduct an empirical study of this framework on DuRecDial .", "This work makes the following contributions: We identify the task of conversational recommendation over multi-type dialogs.", "To facilitate the study of this task, we create a novel dialog dataset DuRecDial , with rich variability of dialog types and domains as shown in Table 1. We propose a conversation generation framework with a novel mixed-goal driven dialog policy mechanism.", "Datasets for Conversational Recommendation To facilitate the study of conversational recommendation, multiple datasets have been created in previous work, as shown in Table 1. The first recommendation dialog dataset is released by Dodge et al. 
"Li et al. (2018) create a human-to-human multi-turn recommendation dialog dataset, which combines the elements of social chitchat and recommendation dialogs.", "Kang et al. (2019) provide a recommendation dialog dataset with clear goals, and Moon et al. (2019) collect a parallel dialog-KG corpus for recommendation.", "Compared with them, our dataset contains multiple dialog types, multi-domain use cases, and rich interaction variability.", "Datasets for Knowledge-Grounded Conversation: As shown in Table 1, CMU DoG (Zhou et al., 2018a) explores two scenarios for Wikipedia-article grounded dialogs: only one participant has access to the document, or both have.", "IIT DoG (Moghe et al., 2018) is another dialog dataset for movie chats, wherein only one participant has access to background knowledge, such as IMDB's facts/plots or Reddit's comments.", "Dinan et al. (2019) create multi-domain multi-turn conversations grounded on Wikipedia articles.", "OpenDialKG (Moon et al., 2019) provides a chitchat dataset between two agents, aimed at modeling dialog logic by walking over a knowledge graph (Freebase).", "Wu et al. (2019) provide a Chinese dialog dataset, DuConv, where one participant can proactively lead the conversation with an explicit goal.", "KdConv (Zhou et al., 2020) is a Chinese dialog dataset where each dialog contains in-depth discussions on multiple topics.", "In comparison with them, our dataset contains multiple dialog types, clear goals to achieve during each conversation, and user profiles for personalized conversation.", "Models for Conversational Recommendation: Previous work on conversational recommender systems falls into two categories: (1) task-oriented dialog-modeling approaches, in which the systems ask questions about user preferences over predefined slots to select items for recommendation (Christakopoulou et al., 2018, 2016; Lee et al., 2018; Reschke et al., 2013; Sun and Zhang, 2018; Warnestal, 2005; Zhang et al., 2018b); (2) non-task dialog-modeling approaches, in which the models learn dialog strategies from the dataset without predefined task slots and then make recommendations without slot filling (Chen et al., 2019; Kang et al., 2019; Li et al., 2018; Moon et al., 2019; Zhou et al., 2018a).", "Our work is closer to the second category, and differs from them in that we conduct multi-goal planning to make proactive conversational recommendations over multi-type dialogs.", "Goal-Driven Open-Domain Conversation Generation: Recently, imposing goals on open-domain conversation generation models has attracted much research interest (Moon et al., 2019; Li et al., 2018; Tang et al., 2019b; Wu et al., 2019), since it provides more controllability to conversation generation and enables many practical applications, e.g., recommendation of engaging entities.", "However, these models can only produce a dialog towards a single goal, instead of a goal sequence as done in this work.", "We notice that the model by Xu et al. (2020) can conduct multi-goal planning for conversation generation."
"But their goals are limited to in-depth chitchat about related topics.", "We define one person in the dialog as the recommendation seeker (the role of users) and the other as the recommender (the role of bots).", "We ask the recommender to proactively lead the dialog and then make recommendations with consideration of the seeker's interests, instead of having the seeker ask for recommendations from the recommender.", "Figure 2 shows our task design.", "[Figure 2: Task design. Annotation draws on the ground-truth profile of the seeker s_k, the seeker profile built so far (P^{s_k}_{i-1}), task templates, and the knowledge graph.]", "The data collection consists of three steps: (1) collection of seeker profiles and the knowledge graph; (2) collection of task templates; (3) annotation of dialog data.3", "Next we will provide details of each step.", "Explicit seeker profiles: Each seeker is equipped with an explicit unique profile (a ground-truth profile), which contains the information of name, gender, age, residence city, occupation, and his/her preferences on domains and entities.", "We automatically generate the ground-truth profile for each seeker, which is known to the seeker and unknown to the recommender.", "We ask that the utterances of each seeker be consistent with his/her profile.", "We expect that this setting encourages the seeker to clearly and self-consistently explain what he/she likes/dislikes.", "In addition, the recommender can acquire seeker profile information only through dialogs with the seekers.", "2 Please see Appendix 1 for more details."
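As an illustration, a seeker profile could be represented with a schema like the following; the field names are hypothetical and may differ from DuRecDial's actual data format.

```python
# Hypothetical schema for an explicit seeker profile; illustrative only.
from dataclasses import dataclass, field

@dataclass
class SeekerProfile:
    name: str
    gender: str
    age: int
    residence_city: str
    occupation: str
    liked_domains: list = field(default_factory=list)   # e.g., ["movie", "music"]
    liked_entities: list = field(default_factory=list)  # e.g., stars the seeker likes
    accepted_recommendations: list = field(default_factory=list)
    rejected_recommendations: list = field(default_factory=list)
```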
"Knowledge graph: Inspired by work on document-grounded conversation (Ghazvininejad et al., 2018; Moghe et al., 2018), we provide a knowledge graph to support the annotation of more informative dialogs.", "We build it by crawling data from the Baidu Wiki and Douban websites.", "Table 3 presents the statistics of this knowledge graph.", "Multi-type dialogs for multiple domains: We expect that the dialog between the two task workers starts from a non-recommendation scenario, e.g., question answering or social chitchat, and that the recommender proactively and naturally guides the dialog to a recommendation target (an entity).", "The targets usually fall into the seeker's interests, e.g., the movies of a star that the seeker likes.", "Moreover, to be close to the setting in practical applications, we ask each seeker to conduct multiple sequential dialogs with the recommender.", "In the first dialog, the recommender asks questions about the seeker profile.", "Then in each of the remaining dialogs, the recommender makes recommendations based on the seeker's preferences collected so far, and the seeker profile is automatically updated at the end of each dialog.", "We ask that the change of seeker profile be reflected in later dialogs.", "The difference between these dialogs lies in sub-dialog types and recommended entities.", "Rich variability of interaction: How to iterate upon the initial recommendation plays a key role in the interaction procedure for recommendation.", "To provide better supervision for this capability, we expect that the task workers introduce diverse interaction behaviors in dialogs to better mimic the decision-making process of the seeker.", "For example, the seeker may reject the initial recommendation, mention a new topic, ask a question about an entity, or simply accept the recommendation.", "The recommender is required to respond appropriately and follow the seeker's new topic.", "Task templates as annotation guidance: Due to the complexity of our task design, it is very hard to conduct data annotation with only the high-level instructions mentioned above.", "Inspired by the work of MultiWOZ (Budzianowski et al., 2018), we provide a task template for each dialog to be annotated, which guides the workers to annotate in the way we expect.", "As shown in Table 2, each template contains the following information: (1) a goal sequence, where each goal consists of two elements, a dialog type and a dialog topic, corresponding to a sub-dialog; (2) a detailed description of each goal.", "[Table 2 (excerpt). Goal1: QA (dialog type) about the movie <Stolen Life> (dialog topic). Goal description: the seeker takes the initiative and asks for information about the movie <Stolen Life>; the recommender replies according to the given knowledge graph; finally, the seeker provides feedback.]", "We create these templates by (1) first automatically enumerating appropriate goal sequences that are consistent with the seeker's interests and have natural topic transitions, and (2) then generating goal descriptions with the use of some rules and human annotation.", "To obtain this data, we develop an interface and a pairing mechanism.", "We pair up task workers and give each of them the role of seeker or recommender.", "Then the two workers conduct data annotation with the help of task templates, seeker profiles and the knowledge graph.", "In addition, we ask that the goals in templates be tagged in every dialog.", "Data structure: We organize the dataset of DuRecDial according to seeker IDs.", "In DuRecDial, there are multiple seekers (each with a different profile) and only one recommender.", "Each seeker $s_k$ has multiple dialogs $\{d^{s_k}_i\}_i$ with the recommender.", "For each dialog $d^{s_k}_i$, we provide a knowledge graph and a goal sequence for data annotation, and a seeker profile updated with this dialog.", "Data statistics: Table 3 provides statistics of the knowledge graph and DuRecDial, indicating rich variability of dialog types and domains.", "Table 3 (statistics of the knowledge graph and DuRecDial). Knowledge graph: #Domains 7; #Entities 21,837; #Attributes 454; #Triples 222,198. DuRecDial: #Dialogs 10,190; #Sub-dialogs for QA/Rec/task/chitchat 6,722/8,756/3,234/10,190; #Utterances 155,477; #Seekers 1,362; #Entities recommended/accepted/rejected 11,162/8,692/2,470.", "Data quality: We conduct human evaluations of data quality.", "A dialog is rated 1 if it follows the instructions in the task template and the utterances are fluent and grammatical, and 0 otherwise.", "Then we ask three persons to judge the quality of 200 randomly sampled dialogs.", "Finally, we obtain an average score of 0.89 on this evaluation set.", "Problem definition: Let $D^{s_k} = \{d^{s_k}_i\}_{i=0}^{N_{D^{s_k}}}$ denote a set of dialogs by the seeker $s_k$ ($0 \leq k < N_s$), where $N_{D^{s_k}}$ is the number of dialogs by the seeker $s_k$ and $N_s$ is the number of seekers.", "Recall that we attach each dialog (say $d^{s_k}_i$) with an updated seeker profile (denoted as $P^{s_k}_i$), a knowledge graph $K$, and a goal sequence $G = \{g_t\}_{t=0}^{T_g - 1}$.", "Given a context $X$ with utterances $\{u_j\}_{j=0}^{m-1}$ from the dialog $d^{s_k}_i$, a goal history $G' = \{g_0, \ldots, g_{t-1}\}$ (with $g_{t-1}$ as the goal for $u_{m-1}$), $P^{s_k}_{i-1}$ and $K$, the aim is to provide an appropriate goal $g_c$ to determine where the dialog goes and then produce a proper response $Y = \{y_0, y_1, \ldots, y_n\}$ for completion of the goal $g_c$."
"Framework overview The overview of our framework MGCG is shown in Figure 3.", "The goal-planning module outputs goals to proactively and naturally lead the conversation.", "It first takes as input $X$, $G'$, $K$ and $P^{s_k}_{i-1}$, then outputs $g_c$.", "The responding module is responsible for the completion of each goal by producing responses conditioned on $X$, $g_c$, and $K$.", "For the implementation of the responding module, we adopt a retrieval model and a generation model proposed by Wu et al. (2019), and modify them to suit our task.", "For model training, each [context, response] pair in $d^{s_k}_i$ is paired with its ground-truth goal, $P^{s_k}_i$ and $K$.", "These goals are used as answers for training the goal-planning module, while the tuples of [context, ground-truth goal, $K$, response] are used for training the responding module.", "As shown in Figure 3(a), we divide the task of goal planning into two sub-tasks: goal completion estimation and current goal prediction.", "Goal completion estimation For this sub-task, we use a convolutional neural network (CNN) (Kim, 2014) to estimate the probability of goal completion: $P_{GC}(l = 1 \mid X, g_{t-1})$. (1)", "Current goal prediction If $g_{t-1}$ is not completed ($P_{GC} < 0.5$), then $g_c = g_{t-1}$, where $g_c$ is the goal for $Y$.", "Otherwise, we use CNN-based multi-task classification to predict the current goal by maximizing the following probability: $g_t = \arg\max_{g_{ty}, g_{tp}} P_{GP}(g_{ty}, g_{tp} \mid X, G', P^{s_k}_i, K)$, (2) $g_c = g_t$, (3) where $g_{ty}$ is a candidate dialog type and $g_{tp}$ is a candidate dialog topic.", "4.3 Retrieval-based Response Model We modify the original retrieval model to suit our task by emphasizing the use of goals.", "As shown in Figure 3(b), our response ranker consists of five components: a context-response representation module (C-R Encoder), a knowledge representation module (Knowledge Encoder), a goal representation module (Goal Encoder), a knowledge selection module (Knowledge Selector), and a matching module (Matcher).", "The C-R Encoder has the same architecture as BERT (Devlin et al., 2018); it takes a context $X$ and a candidate response $Y$ as segments a and b in BERT, and leverages stacked self-attention to produce the joint representation of $X$ and $Y$, denoted as $xy$.", "Each piece of related knowledge $knowledge_i$ is also encoded as a vector by the Knowledge Encoder using a bi-directional GRU (Chung et al., 2014), which can be formulated as $k_i = [h_{T_k}; h_0]$, where $T_k$ denotes the length of the knowledge, and $h_{T_k}$ and $h_0$ represent the last and initial hidden states of the two directional GRUs, respectively.", "The Goal Encoder uses bi-directional GRUs to encode a dialog type and a dialog topic for goal representation (denoted as $g_c$).", "For knowledge selection, we let the context-response representation $xy$ attend to all knowledge vectors $k_i$ and obtain the attention distribution: $p(k_i \mid x, y, g_c) = \frac{\exp(\mathrm{MLP}([xy; g_c]) \cdot k_i)}{\sum_j \exp(\mathrm{MLP}([xy; g_c]) \cdot k_j)}$. (4)", "We fuse all related knowledge information into a single vector $k_c = \sum_i p(k_i \mid x, y, g_c) k_i$.", "We view $k_c$, $g_c$ and $xy$ as the information from the knowledge source, goal source and dialogue source, respectively, and fuse the three information sources into a single vector via concatenation.", "Finally, we calculate a matching probability for each $Y$ by: $p(l = 1 \mid X, Y, K, g_c) = \mathrm{softmax}(\mathrm{MLP}([xy; k_c; g_c]))$. (5)",
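"As a toy illustration of Equations (4) and (5), the following numpy sketch uses random vectors in place of the trained encoders; all dimensions and weights are illustrative only:",

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_knowledge = 8, 5                  # toy sizes
xy = rng.normal(size=d)                # joint X-Y vector from the C-R Encoder
g_c = rng.normal(size=d)               # goal vector from the Goal Encoder
K = rng.normal(size=(n_knowledge, d))  # one vector k_i per knowledge item

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Eq. (4): attention of [xy; g_c] over all knowledge vectors
W_att = rng.normal(size=(2 * d, d))    # one-layer stand-in for the MLP
query = np.tanh(np.concatenate([xy, g_c]) @ W_att)
p_k = softmax(K @ query)

# Fuse all related knowledge into a single vector k_c
k_c = p_k @ K

# Eq. (5): matching probability from the three fused information sources
W_m = rng.normal(size=(3 * d, 2))      # two logits: l = 0 and l = 1
p_match = softmax(np.concatenate([xy, k_c, g_c]) @ W_m)
print("p(l=1 | X, Y, K, g_c) =", p_match[1])
```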
"4.4 Generation-based Response Model To highlight the importance of conversational goals, we also modify the original generation model by introducing an independent encoder for goal representation.", "As shown in Figure 3(c), our generator consists of five components: a Context Encoder, a Knowledge Encoder, a Goal Encoder, a Knowledge Selector, and a Decoder.", "Given a context $X$, a conversational goal $g_c$ and a knowledge graph $K$, our generator first encodes them as vectors with the use of the above encoders (based on bi-directional GRUs).", "We assume that using the correct response is conducive to knowledge selection.", "Minimizing the KLDivLoss then makes the effect of knowledge selection at the prediction stage (where the correct response is not available) close to that of knowledge selection informed by the correct response.", "For knowledge selection, the model thus learns a knowledge-selection strategy through minimizing the KLDivLoss between two distributions, a prior distribution $p(k_i \mid x, g_c)$ and a posterior distribution $p(k_i \mid x, y, g_c)$.", "These are formulated as: $p(k_i \mid x, y, g_c) = \frac{\exp(k_i \cdot \mathrm{MLP}([x; y; g_c]))}{\sum_{j=1}^{N} \exp(k_j \cdot \mathrm{MLP}([x; y; g_c]))}$, (6) $p(k_i \mid x, g_c) = \frac{\exp(k_i \cdot \mathrm{MLP}([x; g_c]))}{\sum_{j=1}^{N} \exp(k_j \cdot \mathrm{MLP}([x; g_c]))}$, (7) $L_{KL}(\theta) = \frac{1}{N} \sum_{i=1}^{N} p(k_i \mid x, y, g_c) \log \frac{p(k_i \mid x, y, g_c)}{p(k_i \mid x, g_c)}$. (8)", "In the training procedure, we fuse all related knowledge information into a vector $k_c = \sum_i p(k_i \mid x, y, g_c) k_i$, the same as in the retrieval-based method, and feed it to the decoder for response generation.", "In the testing procedure, the fused knowledge is estimated by $k_c = \sum_i p(k_i \mid x, g_c) k_i$, without ground-truth responses.", "The decoder is implemented with the Hierarchical Gated Fusion Unit described in Yao et al. (2017), which is a standard GRU-based decoder enhanced with external knowledge gates.", "In addition to the loss $L_{KL}(\theta)$, the generator uses the following losses: NLL loss: it computes the negative log-likelihood of the ground-truth response ($L_{NLL}(\theta)$).", "BOW loss: we use the BOW loss proposed by Zhao et al. (2017) to ensure the accuracy of the fused knowledge $k_c$ by enforcing the relevancy between the knowledge and the true response. 4", "4 The BOW loss introduces an auxiliary loss that requires the decoder network to predict the bag of words in the response, to tackle the vanishing latent variable problem.", "Specifically, let $w = \mathrm{MLP}(k_c) \in \mathbb{R}^{|V|}$, where $|V|$ is the vocabulary size.", "We define: $p(y_t \mid k_c) = \frac{\exp(w_{y_t})}{\sum_{v=1}^{|V|} \exp(w_v)}$. (9)", "Then, the BOW loss is defined to minimize: $L_{BOW}(\theta) = -\sum_{t=0}^{n} \log p(y_t \mid k_c)$. (10)", "Finally, we minimize the following loss function: $L(\theta) = L_{KL}(\theta) + L_{NLL}(\theta) + L_{BOW}(\theta)$, (11) where $\theta$ denotes the trainable parameters.",
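"For concreteness, a toy numpy sketch of the knowledge-selection objective in Equations (6)-(8) and the BOW term in Equations (9)-(10), for a single example; the weights below are illustrative stand-ins for the trained model:",

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_k, vocab = 8, 5, 50
x, y, g_c = rng.normal(size=(3, d))    # encoder outputs (toy stand-ins)
K = rng.normal(size=(n_k, d))          # knowledge vectors k_i

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W_post = rng.normal(size=(3 * d, d))   # MLP([x; y; g_c]) for Eq. (6)
W_prior = rng.normal(size=(2 * d, d))  # MLP([x; g_c]) for Eq. (7)
posterior = softmax(K @ (np.concatenate([x, y, g_c]) @ W_post))
prior = softmax(K @ (np.concatenate([x, g_c]) @ W_prior))

# Eq. (8): KL divergence pulling the prior towards the posterior
l_kl = np.sum(posterior * np.log(posterior / prior)) / n_k

# Knowledge fusion: posterior-weighted in training, prior-weighted in testing
k_c = posterior @ K

# Eqs. (9)-(10): predict the response tokens from the fused knowledge alone
W_bow = rng.normal(size=(d, vocab))
token_probs = softmax(k_c @ W_bow)
response_ids = [3, 17, 42]             # toy ground-truth token ids
l_bow = -np.sum(np.log(token_probs[response_ids]))
print(l_kl, l_bow)
```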
"We split DuRecDial into train/dev/test data by randomly sampling 65%/10%/25% of the data at the level of seekers, instead of individual dialogs. 5", "5 Please see Appendix 2 for model parameter settings.", "To evaluate the contribution of goals, we conduct an ablation study by replacing the input goals with UNK for the responding model.", "For knowledge usage, we conduct another ablation study, where we remove the input knowledge by replacing it with UNK.", "S2S: We implement a vanilla sequence-to-sequence model (Sutskever et al., 2014), which is widely used for open-domain conversation generation.", "MGCG_R: Our system with automatic goal planning and a retrieval-based responding model.", "MGCG_G: Our system with automatic goal planning and a generation-based responding model.", "Metrics For automatic evaluation, we use several common metrics such as BLEU (Papineni et al., 2002), F1, perplexity (PPL), and DISTINCT (DIST-2) (Li et al., 2016) to measure the relevance, fluency, and diversity of generated responses.", "Following the setting of previous work (Wu et al., 2019; Zhang et al., 2018a), we also measure the performance of all models using Hits@1 and Hits@3. 6", "Here we let each model select the best response from 10 candidates.", "Those 10 candidate responses consist of the ground-truth response generated by humans and nine randomly sampled ones from the training set.", "6 Candidates (including the golden response) are scored by PPL using the generation-based model; the candidates are then sorted by these scores, and Hits@1 and Hits@3 are calculated.", "Moreover, we also evaluate the knowledge-selection capability of each model by calculating knowledge precision/recall/F1 scores, as done in Wu et al. (2019). 7", "In addition, we also report the performance of our goal-planning module, including the accuracy of goal completion estimation, dialog type prediction, and dialog topic prediction.",
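"The Hits@k protocol from footnote 6 takes only a few lines; this sketch assumes that a lower perplexity means a better candidate:",

```python
import numpy as np

def hits_at_k(candidate_ppl, gold_index, k):
    """A hit means the gold response is among the k candidates
    with the lowest perplexity."""
    ranking = np.argsort(candidate_ppl)  # ascending: best candidates first
    return int(gold_index in ranking[:k])

# 10 candidates: index 0 is the human response, 1-9 are sampled negatives
ppl = np.array([12.3, 55.1, 48.9, 60.2, 35.7, 71.0, 44.4, 58.8, 39.9, 66.5])
print(hits_at_k(ppl, gold_index=0, k=1), hits_at_k(ppl, gold_index=0, k=3))
```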
"Results Our goal-planning model achieves accuracy scores of 94.13%, 91.22%, and 42.31% for goal completion estimation, dialog type prediction, and dialog topic prediction, respectively.", "The accuracy of dialog topic prediction is relatively low since the number of topic candidates is very large (around 1,000), which makes topic prediction difficult.", "As shown in Table 4, for response generation, both MGCG_R and MGCG_G outperform S2S by a large margin in terms of all the metrics under the same model setting (without gl.+kg., with gl., or with gl.+kg.).", "Moreover, MGCG_R performs better in terms of Hits@k and DIST-2, but worse in terms of knowledge F1, when compared to MGCG_G. 8", "8 We calculate an average of F1 over all the dialogs; as a result, the value of F1 may not lie between P and R.", "This might be explained by the fact that they are optimized on different metrics.", "We also found that the methods using goals and knowledge outperform those without goals and knowledge, confirming the benefits of goals and knowledge as guidance information.", "Metrics: The human evaluation is conducted at the level of both turns and dialogs.", "For turn-level human evaluation, we ask each model to produce a response conditioned on a given context, the predicted goal and related knowledge. 9", "9 Please see Appendix 3 for more details.", "The generated responses are evaluated by three annotators in terms of fluency, appropriateness, informativeness, and proactivity.", "Appropriateness measures whether the response completes the current goal and is also relevant to the context.", "Informativeness measures whether the model makes full use of knowledge in the response.", "Proactivity measures whether the model can successfully introduce new topics with good fluency and coherence.", "For dialogue-level human evaluation, we let each model converse with a human and proactively make recommendations when given the predicted goals and related knowledge. 10", "10 Please see Appendix 4 for more details.", "For each model, we collect 100 dialogs.", "These dialogs are then evaluated by three persons in terms of two metrics: (1) goal success rate, which measures how well the conversation goal is achieved, and (2) coherence, which measures the relevance and fluency of a dialog as a whole.", "All the metrics have three grades: good (2), fair (1), bad (0).", "For proactivity, 2 indicates that the model introduces new topics relevant to the context, 1 means that no new topics are introduced but knowledge is used, and 0 means that the model introduces new but irrelevant topics.", "For goal success rate, 2 means that the system completes more than half of the goals from the goal-planning module, 0 means that the system completes no more than one goal, and otherwise the grade is 1.", "For coherence, 2/1/0 means that two-thirds/one-third/very few utterance pairs are coherent and fluent.", "Results All human evaluations are conducted by three persons.", "As shown in Table 5, our two systems outperform S2S by a large margin, especially in terms of appropriateness, informativeness, goal success rate and coherence.", "In particular, S2S tends to generate safe and uninformative responses, failing to complete goals in most dialogs.", "Our two systems produce more appropriate and informative responses and achieve a higher goal success rate by making full use of goal information and knowledge.", "Moreover, the retrieval-based model performs better in terms of fluency, since its responses are selected from the original human utterances rather than generated automatically.", "But it performs worse on all the other metrics when compared to the generation-based model.", "This might be caused by the limited number of retrieval candidates.", "Finally, it can be seen that there is still much room for performance improvement in terms of appropriateness and goal success rate, which we leave as future work.", "In order to further analyze the relationship between knowledge usage and goal completion, we provide the number of failed goals, completed goals, and pieces of used knowledge for each method over different dialog types in Table 6.",
"We see that the amount of used knowledge is proportional to the goal success rate across different dialog types and different methods, indicating that the knowledge-selection capability is crucial to goal completion in dialogs.", "Moreover, the goal of a chitchat dialog is easier to complete in comparison with the others, while QA and recommendation dialogs are more challenging to complete.", "How to strengthen knowledge-selection capability in the context of multi-type dialogs, especially for QA and recommendation, is very important, and we leave it as future work.", "We identify the task of conversational recommendation over multi-type dialogs, and create a dataset, DuRecDial, with multiple dialog types and multi-domain use cases.", "We demonstrate the usability of this dataset and provide results of state-of-the-art models for future studies.", "The complexity in DuRecDial makes it a great testbed for more tasks such as knowledge-grounded conversation (Ghazvininejad et al., 2018), domain transfer for dialog modeling, target-guided conversation (Tang et al., 2019a) and multi-type dialog modeling (Yu et al., 2017).", "The study of these tasks is left as future work.", "We would like to thank Ying Chen for dataset annotation and thank Yuqing Guo and the reviewers for their insightful comments.", "This work was supported by the National Key Research and Development Project of China (No. 2018AAA0101900) and the Natural Science Foundation of China (No. 61976072)." ]
[ "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "abstain", "method", "objective", "objective", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "objective", "other", "objective", "abstain", "other", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "other", "other" ]
[ "Personalized news recommendation methods are widely used in online news services.", "These methods usually recommend news based on the matching between news content and user interest inferred from historical behaviors.", "However, these methods usually have difficulties in making accurate recommendations to cold-start users, and tend to recommend similar news with those users have read.", "In general, popular news usually contain important information and can attract users with different interests.", "Besides, they are usually diverse in content and topic.", "Thus, in this paper we propose to incorporate news popularity information to alleviate the cold-start and diversity problems for personalized news recommendation.", "In our method, the ranking score for recommending a candidate news to a target user is the combination of a personalized matching score and a news popularity score.", "The former is used to capture the personalized user interest in news.", "The latter is used to measure time-aware popularity of candidate news, which is predicted based on news content, recency, and real-time CTR using a unified framework.", "Besides, we propose a popularity-aware user encoder to eliminate the popularity bias in user behaviors for accurate interest modeling.", "Experiments on two real-world datasets show our method can effectively improve the accuracy and diversity for news recommendation.", "Personalized news recommendation is a useful technique to help users alleviate information overload when visiting online news platforms (Wu et al., 2020d,b, 2021; Ge et al., 2020).", "Existing personalized news recommendation methods usually recommend news to a target user based on the matching between the content of candidate news and user interest inferred from previous behaviors (Zhu et al., 2019; Wu et al., 2019f).", "For example, Wu et al. 6.1 magnitude quake rattles Alaska Biden aims to rebuild and expand legal immigration Returning to normal is not simple for everyone Man accused of plotting Walmart attack arrested Russia diplomat warns US ahead of summit Black Wall Street was shattered 100 years ago Figure 1: Several example popular news.", "(2019e) proposed to model news content from news title based on multi-head self-attention.", "In addition, they modeled user interest from the previously clicked news articles with multi-head self-attention to capture the relatedness between different behaviors.", "An et al. (2019) proposed to use CNN network to learn news embeddings from news titles and categories, and model both long-term and short-term user interests from news click behaviors.", "However, these personalized news recommendation methods usually have difficulties in making accurate recommendations to cold-start users, since the behaviors of these users are very sparse and it is difficult to model their interest (Trevisiol et al., 2014).", "Besides, these methods tend to recommend similar news with those users have read (Nguyen et al., 2014), which may hurt user experience and is not beneficial for them to receive new information.", "The motivation for this work is that popular news usually convey important information such as catastrophes, epidemics, presidential election and so on, as shown in Fig.", "1. 
"These popular news can attract many users to read and discuss them even if the users have different personal interests (Yang, 2016).", "In addition, popular news are diverse in content and can cover many different topics (Houidi et al., 2019).", "Thus, incorporating popular news has the potential to alleviate the cold-start and diversity problems in personalized news recommendation.", "In this paper, we propose a new method named PP-Rec for news recommendation 1, which can consider not only personalized user interest in news but also the popularity of candidate news.", "1 https://github.com/JulySinceAndrew/PP-Rec", "In our method, the ranking score of recommending a candidate news to a target user is the combination of a personalized matching score and a news popularity score.", "The personalized matching score is used to measure the user's personal interest in the content of candidate news.", "The news popularity score is used to measure the time-aware popularity of candidate news.", "Since news popularity is influenced by many different factors such as content and freshness, we propose a unified model to predict time-aware news popularity based on news content, recency, and near real-time click-through rate (CTR).", "These two scores are combined via a personalized aggregator for news ranking, which can capture the personalized preferences of different users for popular news.", "Moreover, we propose a knowledge-aware news encoder to generate news content embeddings from both news texts and entities.", "Besides, since news popularity can affect users' click behaviors (Zheng et al., 2010) and lead to bias in behavior-based user interest modeling, we propose a popularity-aware user encoder which can account for the popularity bias in user behaviors and learn more accurate user interest representations.", "Extensive experiments on two real-world datasets show PP-Rec can effectively improve the performance of news recommendation in terms of both accuracy and diversity.", "Personalized news recommendation methods are widely used in online news platforms (Liu et al., 2010; Bansal et al., 2015; Wu et al., 2020d,c, 2019d).", "Existing personalized news recommendation methods usually rank candidate news for a target user based on the matching between news content and user interest (Wang et al., 2018; Qi et al., 2020; Wu et al., 2020a, 2019c).", "For example, Okura et al. (2017) learned news embeddings from news bodies via an auto-encoder and modeled user interests from the clicked news via a GRU network.", "The matching between news and user is formulated as the dot product of their embeddings.",
"Wu et al. (2019e) used multi-head self-attention networks to generate news content embeddings from news titles and to generate user interest embeddings from clicked news.", "They also used the dot product of user and news embeddings as personalized matching scores for news ranking.", "These personalized news recommendation methods usually model user interests from previous news click behaviors.", "However, it is difficult for these methods to make accurate recommendations to cold-start users whose behaviors are very sparse (Trevisiol et al., 2014).", "These users are very common in online news platforms, making the cold-start problem a critical issue in real systems (Sedhain et al., 2014).", "Although some methods were proposed to alleviate the cold-start problem in personalized recommendation (Sedhain et al., 2014; Trevisiol et al., 2014), they usually utilized side information (Son, 2016) such as social networks (Lin et al., 2014) to enhance user interest modeling.", "However, the side information used in these methods may be unavailable in news recommendation.", "In addition, these personalized methods tend to recommend news similar to what users have already read, which makes it difficult for users to receive new information and may hurt their news reading experience (Nguyen et al., 2014; Wu et al., 2019f).", "Different from these methods, in PP-Rec we consider not only users' personal interest in news but also the popularity of candidate news, which can alleviate both the cold-start and the diversity problem to some extent.", "Our work is also related to popularity-based news recommendation methods.", "Different from personalized news recommendation methods, which rank candidate news based on users' personal interests, popularity-based news recommendation methods rank candidate news based on their popularity (Phelan et al., 2009; Tatar et al., 2014; Lerman and Hogg, 2010; Szabo and Huberman, 2010; Jonnalagedda et al., 2016).", "A core problem in popularity-based news recommendation methods is how to estimate the popularity of candidate news accurately.", "Most existing methods estimated news popularity based on the statistics of users' interactions with news on online news platforms, such as the number of views and comments (Yang, 2016; Tatar et al., 2014; Lee et al., 2010).", "For example, Yang (2016) proposed to use the frequency of views to measure news popularity.", "Tatar et al. (2014) proposed to predict news popularity based on the number of comments on news via a linear model.", "Li et al. (2011) proposed to use the number of clicks on news to model their popularity and to further adjust the ranking of news with the same topics based on their popularity.", "However, different news usually have significant differences in impression opportunities, and these view and comment numbers are biased by impression counts.", "Different from these methods, we use CTR to model news popularity, which can eliminate the impression bias.", "Besides CTR, we also incorporate the content and recency information of candidate news to predict their popularity in a more comprehensive and time-aware manner.", "In this section, we introduce PP-Rec for news recommendation, which can consider both the personal interest of users and the popularity of candidate news.", "First, we introduce the overall framework of PP-Rec, as shown in Fig. 2.", "Figure 2: The overall framework of PP-Rec.",
"Then we introduce the details of each module in PP-Rec, which are shown in Figs. 3, 4 and 5.", "3.1 Framework of PP-Rec In PP-Rec, the ranking score of recommending a candidate news to a target user is the combination of a personalized matching score $s_m$ and a news popularity score $s_p$.", "The personalized matching score is used to measure the user's personal interest in the content of candidate news, and is predicted based on the relevance between the news content embedding and the user interest embedding.", "The news content embedding is generated by a knowledge-aware news encoder from both news texts and entities.", "The user interest embedding is generated by a popularity-aware user encoder from the content of clicked news as well as their popularity.", "The news popularity score is used to measure the time-aware popularity of candidate news, which is predicted by a time-aware news popularity predictor based on news content, recency, and near real-time CTR.", "First, we introduce the knowledge-aware news encoder, which is shown in Fig. 3.", "Figure 3: Knowledge-aware news encoder.", "It learns news representations from both the text and the entities in the news title.", "Given a news title, we obtain the word embeddings from a word embedding dictionary pre-trained on a large-scale corpus to incorporate initial word-level semantic information.", "We also convert entities into embeddings based on pre-trained entity embeddings to incorporate knowledge information from knowledge graphs into our model.", "There usually exists relatedness among entities in the same news.", "For example, the entity MAC appearing with the entity Lancome may indicate cosmetics, while it usually indicates computers when it appears with the entity Apple.", "Thus, we utilize an entity multi-head self-attention network (MHSA) (Vaswani et al., 2017) to learn entity representations by capturing their relatedness.", "Besides, textual contexts are also informative for learning accurate entity representations.", "For example, the entity MAC usually indicates computers if its textual context is Why do MAC need an ARM CPU? and indicates cosmetics if its textual context is MAC cosmetics expands AR try-on.", "Thus, we propose an entity multi-head cross-attention network (MHCA) to learn entity representations from the textual contexts.", "Then we formulate the unified representation of each entity as the summation of its representations learned by the MHSA and MHCA networks.", "Similarly, we use a word MHSA network to learn word representations by capturing the relatedness among words, and a word MHCA network to capture the relatedness between words and entities.", "Then we build the unified word representation by adding its representations generated by the word MHSA and the word MHCA networks.", "Since different entities usually contribute differently to the news representation, we use an entity attention network to learn an entity-based news representation $e$ from the entity representations.", "Similarly, we use a word attention network to learn a word-based news representation $w$ from the word representations.", "Finally, we learn the unified news representation $n$ as a weighted combination of $e$ and $w$ via an attention network.",
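"A toy numpy sketch of the attention pooling used to build $w$, $e$ and the unified representation $n$; the additive attention form and all dimensions are our simplification, not the exact trained encoder:",

```python
import numpy as np

rng = np.random.default_rng(2)

def attention_pool(H, q):
    """Pool a (length, dim) matrix H into a single vector using query q."""
    scores = np.tanh(H) @ q
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ H

d = 8
words = rng.normal(size=(6, d))     # unified word representations
entities = rng.normal(size=(2, d))  # unified entity representations
q_w, q_e, q_n = rng.normal(size=(3, d))

w_vec = attention_pool(words, q_w)      # word-based news representation w
e_vec = attention_pool(entities, q_e)   # entity-based news representation e
n_vec = attention_pool(np.stack([w_vec, e_vec]), q_n)  # unified embedding n
```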
"Next, we introduce the time-aware news popularity predictor, as shown in Fig. 4.", "Figure 4: Time-aware news popularity predictor.", "It is used to predict time-aware news popularity based on news content, recency, and near real-time CTR information.", "Since popular news usually have a higher click probability than unpopular news, CTR provides a good clue for identifying popular news (Jiang, 2016).", "Thus, we incorporate CTR into news popularity prediction.", "Besides, the popularity of a news article usually changes dynamically.", "Popular news may become less popular as they get out-of-date over time.", "Thus, we use user interactions in the recent $t$ hours to calculate the near real-time CTR (denoted as $c_t$) for news popularity prediction.", "However, the accurate computation of CTR needs sufficient accumulated user interactions, which is challenging for newly published news.", "Fortunately, news content is very informative for predicting news popularity.", "For example, news on breaking events such as earthquakes is usually popular, since it contains important information for many of us.", "Thus, besides near real-time CTR, we incorporate news content into news popularity prediction.", "We apply a dense network to the news content embedding $n$ to predict the content-based news popularity $p_c$.", "Since news content is time-independent and cannot capture the dynamic change of news popularity, we incorporate news recency information, which is defined as the duration between the publication time and the prediction time.", "It measures the freshness of news articles, which is useful for improving content-based popularity prediction.", "We quantify the news recency $r$ in hours and use a recency embedding layer to convert the quantified news recency into an embedding vector $r$.", "Then we apply a dense network to $r$ to predict the recency-aware content-based news popularity $p_r$.", "Besides, since different news content usually has a different lifecycle, we propose to model the time-aware content-based news popularity $p$ from $p_c$ and $p_r$ using a content-specific aggregator: $p = \theta p_c + (1 - \theta) p_r$, $\theta = \sigma(W_p [n, r] + b_p)$, (1) where $\theta \in (0, 1)$ is the content-specific gate, $\sigma(\cdot)$ is the sigmoid activation, $[\cdot, \cdot]$ denotes the concatenation operation, and $W_p$ and $b_p$ are trainable parameters.", "Finally, the time-aware news popularity $s_p$ is formulated as a weighted summation of the content-based popularity $p$ and the CTR-based popularity $c_t$, i.e., $s_p = w_c c_t + w_p p$, where $w_c$ and $w_p$ are trainable parameters.",
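"To make Equation (1) and the final popularity score concrete, a toy numpy sketch with random stand-ins for the dense layers (the gate symbol and all interaction counts are our illustration):",

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
d = 8
n = rng.normal(size=d)              # news content embedding
r = rng.normal(size=d)              # recency embedding
p_c = n @ rng.normal(size=d)        # content-based popularity (dense layer)
p_r = r @ rng.normal(size=d)        # recency-aware content-based popularity

# Eq. (1): content-specific gate mixing the two estimates
W_p = rng.normal(size=2 * d); b_p = 0.0
theta = sigmoid(np.concatenate([n, r]) @ W_p + b_p)
p = theta * p_c + (1.0 - theta) * p_r

# Near real-time CTR from interactions in the last t hours
clicks, impressions = 120, 3000
c_t = clicks / impressions

# Final time-aware popularity score (w_c and w_pop are trainable in the model)
w_c, w_pop = 1.0, 1.0
s_p = w_c * c_t + w_pop * p
print(s_p)
```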
"Figure 5: Popularity-aware user encoder.", "Next, we introduce the popularity-aware user encoder in PP-Rec for user interest modeling, which is shown in Fig. 5.", "In general, news popularity can influence users' click behaviors and cause bias in behavior-based user interest modeling (Zheng et al., 2010).", "Eliminating the popularity bias in user behaviors helps model user interest from user behaviors more accurately.", "For example, a user may click the news Justin Timberlake unveils the song because he likes the songs of Justin Timberlake, while he may click the news House of Representatives impeaches President Trump because it is popular and contains breaking information.", "Among these two behaviors, the former is more informative for modeling the user's interest.", "Thus, we design a popularity-aware user encoder to learn user interest representations from both the content and the popularity of clicked news.", "It contains three components, which we introduce in detail below.", "First, motivated by Wu et al. (2019e), we apply a news multi-head self-attention network to the representations of clicked news to capture their relatedness and learn contextual news representations.", "Second, we uniformly quantify the popularity of the $i$-th clicked news predicted by the time-aware news popularity predictor 2 and convert it into an embedding vector $p_i$ via a popularity embedding layer.", "2 We remove news recency and content here to avoid the non-differentiable quantization operation.", "Third, besides news popularity, news content is also useful for selecting informative news to model user interest (Wu et al., 2019a).", "Thus, we propose a content-popularity joint attention network (CPJA) to alleviate popularity bias and select important clicked news for user interest modeling, which is formulated as: $\alpha_i = \frac{\exp(q^T \tanh(W_u [m_i, p_i]))}{\sum_{j=1}^{N} \exp(q^T \tanh(W_u [m_j, p_j]))}$, (2) where $\alpha_i$ and $m_i$ denote the attention weight and the contextual representation of the $i$-th clicked news, respectively.", "$q$ and $W_u$ are trainable parameters.", "The final user interest embedding $u$ is formulated as a weighted summation of the contextual news representations: $u = \sum_{i=1}^{N} \alpha_i m_i$.", "In this section, we introduce how we rank candidate news and train the model in detail.", "The ranking score of a candidate news for a target user is based on the combination of a personalized matching score $s_m$ and a news popularity score $s_p$.", "The former is computed based on the relevance between the user embedding $u$ and the news embedding $n$.", "Following Okura et al. (2017), we adopt the dot product to compute this relevance.", "The latter is predicted by the time-aware news popularity predictor.", "In addition, the relative importance of the personalized matching score and the news popularity score is usually different for different users.", "For example, the news popularity score is more important than the personalized matching score for cold-start users, since the latter is derived from scarce behaviors and is usually inaccurate.", "Thus, we propose a personalized aggregator to combine the personalized matching score and the news popularity score: $s = (1 - \gamma) s_m + \gamma s_p$, (3) where $s$ denotes the ranking score, and the gate $\gamma$ is computed from the user representation $u$ via a dense network with sigmoid activation.", "We use the BPR pairwise loss (Rendle et al., 2009) for model training.", "In addition, we adopt the negative sampling technique to select a negative sample for each positive sample from the same impression.", "The loss function is formulated as: $L = -\frac{1}{|D|} \sum_{i=1}^{|D|} \log(\sigma(s^p_i - s^n_i))$, (4) where $s^p_i$ and $s^n_i$ denote the ranking scores of the $i$-th positive and negative sample respectively, and $D$ denotes the training dataset.",
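"A toy numpy sketch of the personalized aggregator in Equation (3) and the BPR loss in Equation (4); the gate network is reduced to a single dense layer for illustration:",

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ranking_score(u, n, s_p, w_gate, b_gate):
    """Eq. (3): mix the matching and popularity scores with a user-dependent
    gate; for cold-start users the gate should lean towards s_p."""
    s_m = u @ n                           # dot-product matching score
    gamma = sigmoid(u @ w_gate + b_gate)  # gate from the user embedding
    return (1.0 - gamma) * s_m + gamma * s_p

def bpr_loss(pos_scores, neg_scores):
    """Eq. (4): pairwise BPR loss over positive/negative score pairs."""
    return -np.mean(np.log(sigmoid(pos_scores - neg_scores)))

rng = np.random.default_rng(4)
u, n, w_gate = rng.normal(size=(3, 8))
print(ranking_score(u, n, s_p=0.3, w_gate=w_gate, b_gate=0.0))
print(bpr_loss(np.array([2.0, 1.5]), np.array([0.5, 1.0])))
```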
"To the best of our knowledge, there is no off-the-shelf news recommendation dataset with news popularity information.", "Thus, we built two datasets ourselves.", "The first one is collected from the user logs of the Microsoft News website from October 19 to November 15, 2019, and is denoted as MSN.", "We use the user logs from the last week for evaluation and the rest for model training and validation.", "The second dataset is collected from a commercial news feed in Microsoft from January 23 to April 23, 2020, and is denoted as Feeds.", "We use the logs from the last three weeks for evaluation and the rest for model training and validation.", "For both datasets, we randomly sample 500k impressions for model training, 100k impressions for validation, and 500k impressions for evaluation.", "The detailed statistics are listed in Table 1.", "Following previous works (Wu et al., 2019a; An et al., 2019), we use AUC, MRR, nDCG@5, and nDCG@10 to evaluate recommendation performance.", "In our experiments, word embeddings are 300-dimensional and initialized with Glove embeddings (Pennington et al., 2014).", "The entity embeddings are 100-dimensional vectors pre-trained on knowledge tuples extracted from WikiData via TransE (Bordes et al., 2013).", "We use clicked and unclicked impressions in the most recent hour to compute the near real-time CTR.", "The recency and popularity embeddings are set to 100 dimensions and initialized randomly.", "All multi-head attention networks are set to have 20 attention heads, and the output dimension of each head is 20.", "All gate networks are implemented as two-layer dense networks with 100-dimensional hidden vectors.", "Dropout (Srivastava et al., 2014) is applied to PP-Rec to mitigate overfitting.", "The dropout probability is set to 0.2.", "Adam (Kingma and Ba, 2015) is used for model training with a learning rate of $10^{-4}$.", "Hyper-parameters of PP-Rec and the baselines are tuned on the validation set.",
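"The stated settings can be collected into one configuration sketch; this is our own summary for readability, not the authors' released configuration file:",

```python
CONFIG = {
    "word_emb_dim": 300,        # initialized from Glove
    "entity_emb_dim": 100,      # TransE embeddings trained on WikiData tuples
    "recency_emb_dim": 100,     # randomly initialized
    "popularity_emb_dim": 100,  # randomly initialized
    "ctr_window_hours": 1,      # near real-time CTR window
    "attention_heads": 20,
    "per_head_dim": 20,
    "gate_hidden_dim": 100,     # two-layer dense gate networks
    "dropout": 0.2,
    "optimizer": "Adam",
    "learning_rate": 1e-4,
}
```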
"We compare PP-Rec with two groups of baselines.", "The first group is popularity-based news recommendation methods, including: (1) ViewNum (Yang, 2016): using the number of news views to measure news popularity; (2) RecentPop (Ji et al., 2020): using the number of news views in recent time to measure news popularity; (3) SCENE (Li et al., 2011): using view frequency to measure news popularity and adjusting the ranking of news with the same topics based on their popularity; (4) CTR (Ji et al., 2020): using news CTR to measure news popularity.", "The second group is personalized news recommendation methods, containing: (1) EBNR (Okura et al., 2017): utilizing an auto-encoder to learn news representations and a GRU network to learn user representations; (2) DKN (Wang et al., 2018): utilizing a knowledge-aware CNN network to learn news representations from news titles and entities; (3) NAML (Wu et al., 2019a): utilizing attention networks to learn news representations from news title, body and category; (4) NPA (Wu et al., 2019b): utilizing personalized attention networks to learn news and user representations; (5) NRMS (Wu et al., 2019e): utilizing multi-head self-attention networks to learn both news and user representations; (6) LSTUR (An et al., 2019): modeling users' short-term interests via a GRU network and long-term interests via the user ID; (7) KRED (Liu et al., 2020): learning news representations from titles and entities via a knowledge graph attention network.", "We repeat each experiment 5 times and report the average performance and standard deviation in Table 2, from which we have the following observations.",

Table 2: News recommendation results of different methods (mean ± std); the left block is MSN, the right block is Feeds.

Methods    AUC         MRR         nDCG@5      nDCG@10     AUC         MRR         nDCG@5      nDCG@10
ViewNum    54.12±0.00  24.95±0.00  26.07±0.00  31.56±0.00  58.99±0.00  23.71±0.00  26.83±0.00  32.38±0.00
RecentPop  55.67±0.00  28.72±0.00  30.45±0.00  36.62±0.00  56.27±0.00  24.93±0.00  28.37±0.00  33.89±0.00
SCENE      57.89±0.02  27.41±0.01  28.81±0.02  34.36±0.03  60.82±0.03  27.29±0.03  31.25±0.02  36.56±0.03
CTR        65.72±0.00  30.50±0.00  32.79±0.00  38.68±0.00  66.40±0.00  30.29±0.00  35.53±0.00  40.72±0.00
EBNR       63.90±0.20  30.13±0.12  32.25±0.14  38.05±0.14  64.88±0.04  28.91±0.03  33.29±0.03  38.87±0.02
DKN        64.16±0.19  30.63±0.10  32.98±0.12  38.66±0.11  66.30±0.11  30.25±0.06  35.01±0.07  40.55±0.06
NAML       66.06±0.17  32.10±0.10  34.73±0.11  40.43±0.11  67.50±0.09  31.07±0.08  36.08±0.10  41.61±0.10
NPA        65.83±0.20  31.70±0.09  34.24±0.10  39.96±0.10  67.25±0.10  30.80±0.05  35.72±0.07  41.25±0.07
NRMS       66.34±0.16  32.00±0.08  34.68±0.09  40.39±0.09  68.10±0.05  31.47±0.03  36.61±0.03  42.12±0.03
LSTUR      66.69±0.16  32.12±0.05  34.76±0.05  40.51±0.04  67.43±0.16  30.95±0.11  35.92±0.16  41.45±0.14
KRED       66.54±0.17  31.97±0.14  34.65±0.14  40.38±0.14  67.67±0.18  31.16±0.13  36.19±0.16  41.72±0.16
PP-Rec     71.05±0.09  39.34±0.08  44.01±0.13  50.46±0.20  72.11±0.21  32.42±0.12  38.13±0.08  43.50±0.13

"First, among the popularity-based news recommendation methods, the CTR method outperforms the ViewNum method.", "This is because the number of news views is influenced by impression bias, while CTR can eliminate the impression bias and better measure news popularity.", "Second, PP-Rec outperforms all popularity-based methods.", "This is because these methods usually recommend the same popular news to different users.", "However, different users might prefer different news according to their personalized interests, some of which are not popular and cannot be recommended by these popularity-based methods.", "In contrast, PP-Rec considers both popularity and personalization in news recommendation.", "Third, PP-Rec outperforms all personalized methods.", "This is because personalized methods usually recommend news based on the matching between news and user interest inferred from users' clicked news, and they ignore the popularity of each news.", "However, popular news usually contain important and eye-catching information and can attract the attention of many users with different interests.", "Different from these personalized methods, PP-Rec incorporates news popularity into personalized news recommendation, which allows it to recommend popular news to users and improves the performance of news recommendation.", "We evaluate the performance of PP-Rec and several personalized methods on news recommendation for cold-start users.", "We compare PP-Rec with NAML, KRED, LSTUR and NRMS, since they achieve good performance in Table 2.", "We evaluate their performance on recommending news to users with $K \in \{0, 1, 3, 5\}$ historical clicked news.", "In the following sections, we only show experimental results on the MSN dataset, since the results on the MSN and Feeds datasets are similar.",
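"A small sketch of how such a cold-start breakdown can be computed; the user record format here is our own toy assumption:",

```python
def cold_start_buckets(test_users, history_sizes=(0, 1, 3, 5)):
    """Group test users by the number of historical clicked news, so that
    recommendation metrics can be reported per bucket."""
    buckets = {k: [] for k in history_sizes}
    for user in test_users:
        k = len(user["clicked_news"])
        if k in buckets:
            buckets[k].append(user)
    return buckets

users = [{"clicked_news": []}, {"clicked_news": ["n1"]},
         {"clicked_news": ["n1", "n2", "n3"]}]
print({k: len(v) for k, v in cold_start_buckets(users).items()})
```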
"As shown in Fig. 6, PP-Rec significantly outperforms the other personalized methods.", "Figure 6: Performance on cold-start users.", "This is because these personalized methods usually recommend news based on the matching between news and user interests.", "However, it is difficult for these methods to accurately model the personal interests of cold-start users from their scarce clicks and help them find news of interest.", "Different from these methods, PP-Rec recommends news based on both personalized interest matching and news popularity.", "Popular news usually contain important information and can attract many users with different interests.", "Thus, incorporating news popularity into news recommendation can effectively improve the reading experience of cold-start users.", "In this section, we evaluate the recommendation diversity of PP-Rec and the other personalized methods.", "We use two metrics, i.e., intra-list average distance and new topic ratio, to measure the diversity of the top $K$ ($K \in \{1, \ldots, 10\}$) recommended news.", "The former measures the average distance between recommended news based on their representations, and is widely used in previous works (Zhang and Hurley, 2008; Chen et al., 2018).", "The second one relates the topics of recommended news to the topics of the user's historical clicked news.", "It counts the number of topics of the top-$K$ recommended news which are clicked and are not included in the topics of the user's historical clicked news.", "Besides, we use $K$ to normalize this number.", "Figure 7: Intra-list average distance of news recommended by different methods.", "Figs. 7 and 8 show that PP-Rec can consistently improve the recommendation diversity.", "This is because the personalized methods recommend news to users based on the matching between news and user interest inferred from clicked news, so the recommended news tend to be similar to the users' consumed news.", "Different from these methods, PP-Rec incorporates news popularity into news recommendation.", "Besides news related to the user's interest, PP-Rec can also recommend popular news, which are very diverse in content and topics.", "Thus, PP-Rec can enhance recommendation diversity.",
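"Both diversity metrics can be written down directly; this numpy sketch follows our reading of the definitions above (cosine distance for the intra-list metric, click-filtered new topics for the ratio):",

```python
import numpy as np

def intra_list_average_distance(embs):
    """Average pairwise cosine distance among the top-K recommended news."""
    embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    sims = embs @ embs.T
    k = len(embs)
    return 1.0 - (sims.sum() - np.trace(sims)) / (k * (k - 1))

def new_topic_ratio(rec_topics, rec_clicked, history_topics, k):
    """Normalized count of top-K recommended news that were clicked and
    whose topic is absent from the user's click history."""
    history = set(history_topics)
    hits = sum(1 for topic, clicked in zip(rec_topics[:k], rec_clicked[:k])
               if clicked and topic not in history)
    return hits / k

embs = np.random.default_rng(5).normal(size=(5, 8))
print(intra_list_average_distance(embs))
print(new_topic_ratio(["sports", "crime"], [True, True], ["sports"], k=2))
```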
"In this section, we conduct several ablation studies on PP-Rec.", "First, we verify the effectiveness of the two scores for candidate news ranking, i.e., the news popularity score and the personalized matching score, by removing them individually from PP-Rec.", "Figure 9: Effectiveness of personalized matching score and news popularity score.", "The experimental results are shown in Fig. 9.", "We have two findings from the results.", "First, after removing the news popularity score, the performance of PP-Rec declines.", "This is because PP-Rec incorporates news popularity into news recommendation via this score.", "In addition, popular news usually contain important information and can attract many users with different interests.", "Thus, recommending popular news can improve news recommendation accuracy.", "Second, removing the personalized matching score also hurts the recommendation accuracy.", "This is because this score measures user interest in news and incorporates personalized matching into news recommendation in PP-Rec.", "Since users like to click news related to their personalized interests, recommending users' interested news can effectively improve recommendation accuracy.", "Next, as shown in Fig. 10, we conduct an ablation study to verify the effectiveness of the different information sources in the time-aware news popularity predictor by removing them individually.", "We have several observations from the results.", "First, removing news recency makes the performance of PP-Rec decline.", "This is because news popularity usually changes dynamically, and popular news become unpopular once their information is outdated.", "Since news recency reflects the freshness of news information, incorporating it makes the news popularity modeling more accurate.", "Second, the performance of PP-Rec without news content also declines.", "This is because, after removing it, PP-Rec predicts news popularity based only on the near real-time CTR and recency.", "However, it usually takes some time to accumulate enough impressions to calculate an accurate CTR.", "Thus, removing the news content leaves PP-Rec unable to effectively model the popularity of newly published news.", "Third, PP-Rec performs worse without the near real-time CTR.", "This is because the near real-time CTR effectively measures the click probability of a news item based on the behaviors of a large number of users in the recent period.", "Thus, removing the near real-time CTR makes PP-Rec lose much useful information for modeling the dynamic news popularity.", "We conduct a case study to show the effectiveness of PP-Rec.", "We compare PP-Rec with LSTUR, since LSTUR achieves the best performance among the baseline methods on the MSN dataset.", "In Fig. 11, we list the top 3 news recommended by the two methods to a randomly sampled user, together with their normalized popularity predicted by PP-Rec.", "Figure 11: Top news recommended by PP-Rec and LSTUR.", "We also list the user's clicked news.", "First, we find that the user clicked a news on football, which is recommended by both LSTUR and PP-Rec.", "This is because the user had previously clicked three news on football, which indicates that the user is interested in football.", "Thus, both LSTUR and PP-Rec recommend that news based on the personal interest of this user.", "Second, the user did not click the other news on football recommended by PP-Rec and LSTUR.", "This may be because recommending too much news with similar information makes users feel bored, so the user only clicks a part of it.", "This suggests that recommending news with diverse information may help improve users' reading experience.", "Third, the user clicked a news on crime, which is only recommended by PP-Rec.", "This is because it is hard to predict the user's interest in criminal events from her clicks, making it difficult for LSTUR to recommend this news.", "Different from LSTUR, PP-Rec recommends news based on both personal user interest and news popularity.", "PP-Rec successfully predicts that this news is popular and recommends it.", "This case shows that PP-Rec can improve recommendation accuracy and enhance recommendation diversity by incorporating news popularity.", "In this paper, we propose a new news recommendation method named PP-Rec to alleviate the cold-start and diversity problems of personalized news recommendation; it considers both the personal interest of users and the popularity of candidate news.", "In our method, we rank candidate news based on the combination of a personalized matching score and a news popularity score.", "We propose a unified model to predict time-aware news popularity based on news content, recency, and near real-time CTR.", "In addition, we propose a knowledge-aware news encoder to generate
news content embeddings from news texts and entities, and a popularity-aware user encoder to generate user interest embeddings from the content and popularity of clicked news.", "Extensive experiments on two real-world datasets constructed from the logs of commercial news websites and feeds in Microsoft validate that our method can effectively improve the accuracy and diversity of news recommendation.", "This work was supported by the National Natural Science Foundation of China under Grant numbers U1936208, U1936216, U1836204, and U1705261.", "We are grateful to Xing Xie, Tao Di, and Wei He for their insightful comments and discussions." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "objective", "result", "other", "other" ]
[ "Task-oriented dialogue systems typically require manual annotation of dialogue slots in training data, which is costly to obtain.", "We propose a method that eliminates this requirement: We use weak supervision from existing linguistic annotation models to identify potential slot candidates, then automatically identify domain-relevant slots by using clustering algorithms.", "Furthermore, we use the resulting slot annotation to train a neural-network-based tagger that is able to perform slot tagging with no human intervention.", "This tagger is trained solely on the outputs of our method and thus does not rely on any labeled data.", "Our model demonstrates state-of-the-art performance in slot tagging without labeled training data on four different dialogue domains.", "Moreover, we find that slot annotations discovered by our model significantly improve the performance of an end-to-end dialogue response generation model, compared to using no slot annotation at all.", "Task-oriented dialogue systems typically use annotation based on slots to represent the meaning of user utterances (Young et al., 2013).", "Slots are attributes relevant to completing the task (e.g., price , food type , area ).", "The sets of slots and their values typically need to be designed in advance by domain experts.", "Slots and their values are tracked over the course of the dialogue, forming dialogue state, which allows a dialogue system to plan the next actions effectively (Williams et al., 2013).", "Getting raw data for dialogue system training is not difficult, especially if we restrict the target domain.", "A requirement for dialogue state labels makes this process much more costly.", "However, both traditional pipeline systems (Young et al., 2013) and end-to-end task-oriented architectures (Wen et al., 2017) typically require such annotation.", "While some systems use implicit, latent state representation and do not require annotation (Serban et al., 2016), the behavior of such systems is hard to interpret or control.", "There are several works aiming at keeping interpretability and reducing the annotation needs by automating it (Chen et al., 2014, 2015) or transferring annotation across domains (Zhao and Eskenazi, 2018; Coope et al., 2020), but they still require significant manual effort.", "In this paper, we present a novel approach to discovering a set of domain-relevant dialogue slots and their values given a set of dialogues in the target domain (such as transcripts from a call cen-ter).", "Our approach requires no manual annotation at all in order to tag slots in dialogue data.", "This substantially simplifies dialogue system design and training process, as the developer no longer needs to design a set of slots and annotate their occurrences in training data.", "We discover slots by using unsupervised clustering on top of annotation obtained by domain-independent generic models such as a semantic frame parser or a named entity recognizer (NER).", "To illustrate our approach, let us consider an example given in Figure 1. Find a chinese restaurant that's cheap.", "Although the annotation is descriptive, it contains concepts irrelevant for the domain under consideration.", "Our method selects only relevant slot candidates (depicted in blue).", "Slots discovered by our approach can then be used to design or adapt the database backend for the target domain.", "1. 
Selecting domain-relevant slots from candidates provided by weak supervision from domain-generic linguistic annotation tools.", "We use FrameNet-style (Fillmore, 1976) semantic frames as our main source of weak supervision. 1", "1 See http://framenet.icsi.berkeley.edu/", "We also explore named entity recognition (NER).", "2. Training a standalone slot tagger for the selected slots.", "Based on the discovered slots, we train a slot tagger to annotate in-domain utterances.", "After it is trained, the slot tagger can be used as a standalone component: it does not need the original annotation tools for prediction, and is able to improve on their results.", "3. Evaluation on multiple domains.", "We show that our approach is domain-independent.", "We achieve state-of-the-art results for slot tagging without manual supervision in four different domains, with a 6-16% absolute F1 score increase over the previous benchmark.", "4. Downstream task application.", "We evaluate our approach in a full dialogue response generation task.", "Our slots can be directly used to perform dialogue state tracking by merging annotations from consecutive turns.", "We train an end-to-end neural dialogue system using our automatically discovered slots in the restaurant domain and demonstrate that our approach improves performance over an unsupervised model, finding the correct venue in 5% more cases (35% more when no restaurant ontology is provided).", "Our experimental code is available on GitHub. 2", "2 https://github.com/vojtsek/joint-induction", "2 Related Work The idea of using weak supervision to perform fine-grained language understanding based on domain-relevant (slot-like) attributes was proposed by Heck and Hakkani-Tür (2012), who construct a triple-based database of entity relations based on web search.", "They exploit the structure of in-domain web pages to obtain semantic annotations.", "There are also similar works on relation detection (Hakkani-Tür et al., 2013) or entity extraction (Wang et al., 2014).", "This approach is, however, limited by requiring structured web pages as underlying data.", "Chen et al. (2014) combine semantic frame parsing with word embeddings for weakly supervised semantic slot induction.", "Chen et al. (2015) also use semantic frames, construct lexical knowledge graphs and perform a random walk to get slot candidates.", "However, both approaches only output a ranking of potential slot candidates based on frames.", "Since frame annotation is very fine-grained, this produces a huge number of candidates, requiring their manual merging into slots for any practical use.", "In contrast, we determine domain-relevant slots automatically.", "Coope et al. (2020) focus on a few-shot setting and perform span extraction of slot values using pretrained models.", "Their approach, however, still requires some expert annotation.", "Another direction of research focuses on zero-shot slot filling.",
"Bapna et al. (2017)'s recurrent-neural-network-based slot tagger is pretrained on multiple domains and takes a textual description of the target slot as input in addition to the user utterance.", "This way, adapting to a new domain only involves providing new slot descriptions.", "Further works extend this idea with more complex architectures (Shah et al., 2019; Liu et al., 2020).", "Unsupervised and semi-supervised methods were also investigated for predicting intents (user", "input sentence types).", "Yang et al. (2014) use semi-supervised intent clustering, with manual annotation to seed and interpret the clusters.", "Chen et al. (2016) introduced a model for zero-shot intent embedding prediction based on similarity to known intents.", "Shi et al. (2018) proposed a fully unsupervised intent detection model with the use of sentence clustering based on sentence-level features.", "Most applications of unsupervised or semi-supervised methods to end-to-end dialogue response generation avoid explicit dialogue state modeling (e.g., Serban et al., 2016; Li et al., 2016; Gao et al., 2019).", "They aim at a non-task-oriented setting, where state interpretability or response controllability are less of a concern.", "Other works in task-oriented dialogues use transfer learning for adapting to low-resourced target domains (Zhao and Eskenazi, 2018; Shalyminov et al., 2019), but also keep the dialogue state representation latent.", "In contrast, Jin et al. (2018) propose to model the dialogue state explicitly, in a semi-supervised way.", "They extend the end-to-end encoder-decoder Sequicity model of Lei et al. (2018, cf. Section 4) by introducing an additional decoder that has access to posterior information about the system response.", "This allows them to train a state representation with a reconstruction loss on unsupervised examples, using the state as a limited memory for essential concepts (roughly corresponding to slots).", "Their method can be applied in a fully unsupervised way, but it still requires some amount of in-domain annotations to achieve good performance.", "Our work aims at explicit dialogue state modeling without the need for any in-domain supervision.", "Our slot discovery method has three main stages: (1) We obtain weak supervision labels from automatic", "domain-generic annotation.", "(2) We identify domain-relevant slots based on the annotation labels by iteratively", "(a) merging and", "(b) ranking and selecting the most viable candidates (Section 3.2).", "(3) We use the discovered slots to train an independent slot tagger (Section 3.3).", "Figure 2 shows the overall data flow of our slot annotation pipeline.", "The data are first labeled with domain-generic linguistic annotation models, which we consider weak supervision.", "For our experiments, we use a frame semantic parser and NER, but other models, such as semantic role labeling (SRL; e.g., Palmer et al., 2010) or keyword extraction (e.g., Hulth, 2003), can be used in general.", "We use a simple union of labels provided by all annotation models.",
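To make the union-of-annotations step concrete, here is a minimal Python sketch. It assumes spaCy supplies the NER labels; `parse_frames` is a hypothetical placeholder standing in for a FrameNet-style parser such as SEMAFOR or open-sesame, whose real interfaces differ.

```python
# Sketch of the weak-supervision labeling step: union of labels from
# several domain-generic annotators. spaCy's NER API is real; the frame
# parser below is a stand-in, not the SEMAFOR/open-sesame interface.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes this model is installed

def parse_frames(utterance):
    """Hypothetical frame-parser hook; returns (filler, frame) pairs."""
    return []  # e.g. [("cheap", "Expensiveness"), ("chinese", "Origin")]

def weak_labels(utterance):
    """Every tagged token becomes a (filler, slot-candidate) pair;
    candidates from all sources are simply unioned."""
    candidates = set(parse_frames(utterance))
    for ent in nlp(utterance).ents:
        candidates.add((ent.text, ent.label_))
    return candidates

print(weak_labels("Find a chinese restaurant that's cheap."))
```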
"3 3.2 Discovering Slots: Merging and Ranking Subsequent steps identify domain-relevant slots based on candidates provided by the automatic annotation.", "The slot discovery process is iterative; in each iteration, it: (1) merges similar candidates, (2) ranks candidates' relevance and eliminates irrelevant ones.", "Once no more frames are eliminated, the process stops and we obtain slot labels, which are used to train a slot tagger (see Section 3.3).", "We refer to the automatically tagged tokens as (slot) fillers , and the tags are considered slot candidates.", "We use generic precomputed word embeddings as word representation in both steps.", "We further compute slot embeddings E(s) for each distinct slot s as word embedding averages over (Footnote 3: If the same token is labeled multiple times by different annotation sources, both labels are considered candidates and are very likely to be merged.", "If multiple labels remain after the merging and ranking process, only the first label is kept and the rest are discarded.)", "all respective slot fillers, weighted proportionally by filler frequency.", "The slot embeddings need to be re-computed after each iteration due to the merging step.", "We will now describe the individual steps.", "Since automatic annotation may have a very fine granularity, 4 entities/objects of the same type are often captured by multiple slot candidates.", "With a frame parser, for instance, the frames Direction and Location both relate to the concept of area .", "We thus need to merge similar candidates s_1 . . . s_n under a single candidate.", "We measure the similarity of slots s_1, s_2 as: sim(s_1, s_2) = sim_e(E(s_1), E(s_2)) + sim_ctx(s_1, s_2), where sim_e is a cosine similarity and sim_ctx(s_1, s_2) is a normalized number of occurrences of s_1 and s_2 with the same dependency relation.", "If the similarity exceeds a pre-set threshold, the candidates are merged into one.",
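A minimal sketch of the frequency-weighted slot embeddings and the merging test just described. Function and variable names are illustrative, `sim_ctx` is assumed to be precomputed from dependency co-occurrences, and the threshold value is a placeholder, not the paper's setting.

```python
# Sketch, assuming `emb` maps words to vectors and `fillers` maps each
# filler string to its frequency for one slot candidate.
import numpy as np

def slot_embedding(fillers, emb):
    """E(s): frequency-weighted average of the fillers' word embeddings."""
    vecs = np.stack([emb[f] for f in fillers])
    weights = np.array([fillers[f] for f in fillers], dtype=float)
    weights /= weights.sum()
    return weights @ vecs

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def should_merge(s1, s2, slot_emb, sim_ctx, threshold=1.0):
    """Merge test: cosine similarity of slot embeddings plus the
    (precomputed) dependency-context similarity, against a pre-set
    threshold; 1.0 here is purely illustrative."""
    score = cosine(slot_emb[s1], slot_emb[s2]) + sim_ctx.get((s1, s2), 0.0)
    return score > threshold
```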
"The main goal of this step is to remove irrelevant slot candidates and select only the viable ones.", "We hypothesize that different slots are likely to occur in different contexts (e.g., addresses are requested more often than stated by the user).", "To preserve relevant slots that only occur in rarer contexts, we cluster the data according to verb-slot pairs.", "We then rank candidates within each cluster (see details below).", "We consider candidates with a score higher than a pre-set fraction of a given cluster mean to be relevant and select them for the next rounds.", "If a slot candidate is selected in at least one of the clusters, it is considered viable overall.", "Clustering the data: We process the data with a generic SRL tagger.", "Each occurrence of a filler is thus associated with a head verb whose semantic argument the corresponding word is, if such a verb exists.", "We then compute embeddings of the formed verb-filler pairs as averages of the respective token embeddings.", "The pairs are then clustered using agglomerative (bottom-up) hierarchical clustering with average linkage, according to cosine distance of their embeddings.", "5 The process stops when a predetermined number of clusters is reached.", "(Footnote 4: This is indeed the case for frame-semantic annotation, which we mostly use in our experiments in Section 5. Annotation types that have fewer label types could be further distinguished by e.g. adding the head verb from syntactic parsing, or using word classes/word clustering over the fillers.)", "(Footnote 5: Note that fillers for the same slot candidate may end up in multiple clusters.", "This does not mean that the respective slot candidate is split; it is just ranked for relevance multiple times, with respect to multiple contexts.)", "Candidate Ranking criteria: We use the following metrics to compute the ranking score: 6 Frequency frq(s) is used since candidates that occur frequently in the data are likely important.", "Coherence coh(s) is the average pairwise similarity of all fillers' embeddings: coh(s) = Σ_{(u,v) ∈ F_s^2} cos(E(u), E(v)) / |F_s^2| (1), where F_s^2 is the set of all pairs of fillers for the slot candidate s.", "We follow Chen et al. (2014)'s assumption that fillers with high coherence, i.e., focused on one topic, are good slot candidates.", "TextRank (Mihalcea and Tarau, 2004) is a keyword extraction algorithm.", "It constructs a graph where nodes represent words and edges represent their co-occurrence.", "The dominant eigenvector of the adjacency matrix of this graph then gives the individual words' scores.", "We replace fillers with candidate labels when computing the score, so we obtain results related to slots rather than to particular values.", "(Footnote 6: Usefulness of the individual metrics is confirmed in an ablation study in Section 6.)",
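The coherence score of Eq. (1) can be sketched as follows, assuming each filler is already mapped to an embedding vector. Whether the paper averages over ordered or unordered filler pairs is not recoverable from the garbled formula, so unordered pairs are used here.

```python
# Sketch of Eq. (1): average pairwise cosine similarity over all pairs
# of a candidate's filler embeddings (unordered pairs assumed).
from itertools import combinations
import numpy as np

def coherence(filler_vectors):
    pairs = list(combinations(filler_vectors, 2))
    if not pairs:
        return 0.0
    sims = [float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
            for u, v in pairs]
    return sum(sims) / len(sims)
```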
"Our method described in Section 3.2 can give us a good set of dialogue slots.", "However, using the merged and filtered slots directly may result in low recall, since the original annotation models used as weak supervision are not adapted to our specific domain.", "Therefore, we use the obtained labels to train a new, domain-specific slot tagger to improve performance.", "The tagger has no access to better labels than those derived by our method; however, it has a simpler task, as the set of target labels is now much smaller and the domain is much narrower.", "We model the slot tagging task as sequence tagging, using a convolutional neural network that takes word- and character-based embeddings of the tokens as the input and produces a sequence of respective tags (Lample et al., 2016).", "7 The output layer of the tagger network gives softmax probability distributions over possible tags.", "To further increase recall, we add an inference-time rule: if the most probable predicted tag is 'O' (i.e., no slot) and the second most probable tag has a probability higher than a preset threshold, the second tag is chosen as a prediction instead.", "(Footnote 7: https://github.com/deepmipt/ner)", "As we discuss in Section 6, this threshold is crucial for achieving substantial recall improvement.", "To improve the robustness of our model, we only use 10% of the original in-domain training set (with labels from Section 3.1) to train the slot tagger model.", "The rest of the training set is used for a grid search to determine model hyperparameters (hidden layer size, dropout rate and tag threshold).", "We choose the parameters that yield the best F1 score when compared against the automatic slot discovery results (i.e., no manual annotation is needed here; the aim is good generalization).", "To verify the usefulness of the labels discovered by our method, we use them to train and evaluate an end-to-end task-oriented dialogue system.", "We choose Sequicity (Lei et al., 2018) for our experiments, an LSTM-based encoder-decoder model that uses a system of copy nets and two-stage decoding.", "First, it decodes the dialogue state, so the database can be queried externally.", "In the subsequent step, Sequicity generates the system response conditioned on the belief state and database results.", "This architecture works with a flat representation of the dialogue state, i.e. the state is represented as a sequence of tokens (slot values).", "The default Sequicity model uses gold-standard dialogue state annotation.", "However, a compatible state representation is directly obtainable from our labels, simply by concatenating the labels aggregated in each turn from user utterances.", "Whenever a new value for a slot is found in user input by our tagger, it is either appended to the state representation, or it replaces a previous value of the same slot.", "This artificial supervision thus allows us to provide a learning signal to the Sequicity model even without manually labeled examples.",
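The turn-level state construction just described (new slot values are appended; a repeated slot overwrites its previous value) reduces to a few lines. This is a sketch of the idea under those assumptions, not the Sequicity codebase.

```python
# Sketch of building a flat, Sequicity-compatible state from tagger output.
# Relies on dict insertion order (Python 3.7+) so new slots are "appended"
# while repeated slots replace their previous value.
def update_state(state, tagged_slots):
    """state: {slot: value}; tagged_slots: [(slot, value)] for one user turn."""
    for slot, value in tagged_slots:
        state[slot] = value
    return state

def flatten_state(state):
    """Flat state representation: a token sequence of current slot values."""
    return " ".join(state.values())

state = {}
update_state(state, [("food", "chinese"), ("price", "cheap")])
update_state(state, [("price", "expensive")])   # replaces the old value
print(flatten_state(state))                      # "chinese expensive"
```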
"We evaluate our approach to slot discovery by comparing the resulting slot labels to gold-standard supervised slot annotation.", "Additionally, we evaluate the structure of clusters created during the selection process (Section 3.2.2) by comparing it to gold-standard user intents.", "We also test the usefulness of our labels in a full dialogue response generation setup (Section 4), where we compare to gold-standard dialogue tracking labels.", "We use the following datasets for our experiments: CamRest676 ( CR ) (Wen et al., 2017) has 676 dialogues, 2,744 user utterances, 4 tracked slots and 2 intents in the restaurant domain.", "MultiWOZ (Budzianowski et al., 2018; Eric et al., 2020) is a multi-domain corpus; we picked two domains, hotel reservation and attraction recommendation, to form WOZ-hotel ( WH ) with 14,435 utterances, 9 slots and 3 intents, and WOZ-attr ( WA ) with 7,524 utterances, 8 slots and 3 intents, respectively.", "8 Cambridge SLU (Henderson et al., 2012) ( CS ) contains 10,569 utterances and tracks 5 slots with 5 intents in the restaurant domain.", "ATIS ( AT ) (Hemphill et al., 1990) contains 4,978 utterances with 79 slots and 17 intents in the flights domain.", "9 As sources of weak supervision providing slot candidates, we mainly use the frame semantic parsers SEMAFOR (Das et al., 2010) and open-sesame (Swayamdipta et al., 2017); a union of labels provided by both parsers is used in all our setups.", "In addition, to explore combined sources on the named-entity-heavy ATIS dataset, we include a generic convolutional NER model provided by SpaCy.", "10 To provide features for slot candidate merging and selection, we use AllenNLP (Gardner et al., 2017) for SRL and FastText (Bojanowski et al., 2017) as pretrained word embeddings.", "Slot merging and selection parameters were set heuristically in an initial trial run on the CamRest676 data and proved stable across domains.", "Slot tagger hyperparameters are chosen according to a grid search on a portion of the training data, as described in Section 3.3.", "11 5.2 System Variants and Baselines We test multiple ablation variants of our method: Ours-full is the full version of our method (full annotation setup and trained slot tagger).", "(Footnotes: 8 MultiWOZ contains more domains such as restaurant, train search, and bus search .", "However, we decided not to include these as they are nearly identical to the other domains we use.", "9 We used the ATIS data version from https://www.kaggle.com/siddhadev/atis-dataset-from-ms-cntk .", "10 https://spacy.io 11 Training details are included in Appendix C.)", "Ours-nothr does not use the recall-increasing second-candidate rule in the slot tagger (cf. Section 3.3).", "Ours-notag excludes the slot tagger, directly using the output of our merging and selection step.", "Ours-nocl further excludes the clustering step; slot candidate ranking and selection is performed over all candidates together (cf. Section 3.2.2).", "We also compare to the previous work of Chen et al. (2014), 12 which is similar to Ours-nocl , but it does not merge similar frames and uses different ranking criteria.", "To put our results into perspective, we also include two supervised models for comparison: Tag-supervised is the same model that we use as our slot tagger (see Section 3.3), but it is trained on supervised data.", "Dict-supervised uses a simple dictionary of labels obtained from the training data.", "As an intrinsic evaluation of the verb-slot pair clusters formed for slot ranking in Section 3.2.2, we compare to gold-standard intent annotation with respect to the following baselines: (1) a majority baseline (assigning the most frequent intent class to all instances), and (2) a simple method that represents the utterances as averages of the respective word embeddings and performs sentence-level intent clustering.", "All the slots in a given utterance are then assumed to have the same intent.", "The dialogue generation task is evaluated by comparing to Jin et al. (2018)'s approach introduced in Section 2. We run their model in a fully unsupervised way, i.e. we provide no labeled examples during the training phase, to give a fair comparison against our model.", "To provide more perspective, we also show a supervised variant of Jin et al. (2018)'s model, where gold-standard slot labels are provided.", "For evaluation, we construct a handcrafted reference mapping between our discovered slots and the respective ground-truth slots and intents.", "The mapping is domain-specific, but it is very easy to construct even for an untrained person; the process takes less than 10 minutes for each of our domains.", "It amounts to matching slots from the domain ontology against slots output by our approach, which are represented by FrameNet labels.", "Most importantly, the mapping is only needed for evaluation , not by our method itself.", "We provide an example mapping in Appendix B.",
"We use the following evaluation metrics: Slot F1 score : To reflect slot tagging performance, we measure precision, recall, and F1 for every slot individually.", "An average is then computed from slot-level scores, weighted by the number of slot occurrences in the data.", "We measure slot F1 both on standalone user utterances (slot tagging) and in the context of a dialogue system (dialogue tracking).", "Slot-level Average Precision (AP) .", "The slot candidate picking task is a ranking problem, and we use the average precision metric following Chen et al. (2014).", "Considering a ranked list of discovered slots L = l_1, . . . , l_k, . . . , we compute AP as: AP(L) = ( Σ_k P@k(L) · 1_k ) / #mapped slots (2), where 1_k is an indicator function that equals one if slot l_k has a reference mapping defined, and P@k(L) is the precision at rank k of the ranked list L.",
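Eq. (2) translates directly into a few lines of Python; `ranked` is the ranked slot list and `mapped` the set of slots that have a reference mapping. This is a sketch under those assumptions, not the authors' evaluation script.

```python
# Sketch of slot-level average precision, Eq. (2): precision@k is
# accumulated only at ranks holding a mapped slot, then normalized by
# the number of mapped slots.
def average_precision(ranked, mapped):
    hits, ap = 0, 0.0
    for k, slot in enumerate(ranked, start=1):
        if slot in mapped:
            hits += 1
            ap += hits / k
    return ap / len(mapped) if mapped else 0.0
```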
"Slot Rand Index (RI) is a clustering metric, used to evaluate slot candidate merging.", "RI is the proportion of pairs of slot candidates that are correctly assigned into the same or into different slots", "(following the reference mapping).", "(Table caption residue: results (cf. Section 5.3) on selected datasets, comparing our approach ( Ours ) with a random baseline ( Rnd ).)", "13 Normalized Mutual Information (NMI) is the mutual information between two clusterings, normalized into the (0, 1) interval.", "Thanks to the normalization, it is suitable for comparing two clusterings with different numbers of clusters.", "Intent Accuracy is the percentage of slot occurrences assigned into the correct intent cluster under the reference mapping (see Section 5.2).", "Dialogue Joint Goal Accuracy calculates the proportion of dialogue turns where all user constraints (i.e., the dialogue state summarizing slot values) are captured correctly (Mrkšić et al., 2017).", "Dialogue Entity Match Rate checks the entity offered in the last turn of each dialogue.", "It verifies if a correct entity would be retrieved from the database using the final constraints (Wen et al., 2017).", "For slot tagging and ranking evaluation, we sampled a random data order 50 times and performed 5-fold cross-validation for each permutation.", "For the dialogue generation evaluation, we trained the models 100 times and used averaged results.", "All results are given with 95% confidence intervals.", "We first evaluate the main task of slot tagging and include a manual error analysis, then present detailed results for subtasks (slot candidate ranking and merging) and additional tasks (intent clustering and full response generation).", "(Footnote 13: We compute RI on a union of labels that have a ground-truth slot mapping and all labels selected by our method.", "Labels without a ground-truth mapping are assumed to form single-item pseudo-slots.)", "Slot tagging is evaluated in Table 1. Ours-full (slot selection + trained tagger) outperforms all other approaches by a large margin, especially in terms of recall.", "The performance cannot match the supervised models, but it is not far off in some domains.", "14 Chen et al. (2014)'s method has a slightly higher precision, but our recall is much higher than theirs (see Appendix A.1).", "Note that Chen et al. (2014) do not reduce the set of candidates; they only rank them so that a manual cutoff can be made.", "In contrast, our method reduces the set of candidates significantly.", "A comparison between Ours-notag and Ours-full shows that applying the slot tagger improves both precision and recall.", "The tagger without the threshold decision rule ( Ours-nothr ) mostly performs better than the parser; however, using the threshold is essential to improve recall.", "Experiments on ATIS with NER as an additional source of annotation proved that our method can benefit from it.", "As discussed above, the use of the trained tagging model is crucial to improve the recall of our method.", "In Figure 4, we compare the results with and without the tagger.", "We change the value of the prediction threshold and measure the number of cases in which the tagging model encounters more true positives, false positives or false negatives, respectively.", "As the results show, lowering the threshold increases the number of cases in which the tagger finds more correct slot values (and therefore improves recall), while it does not affect the number of false positives much (and therefore retains precision).", "Error analysis: We conducted a manual error analysis of slot tagging to gain more insight about the output quality and sources of errors.", "In general, we found that the tagger can generalize and capture unseen values (cf. Figure 3).", "One source of errors is the relatively low recall of the frame-semantic parsers used.", "We successfully address this issue by introducing the slot tagger; however, many slot values remain untagged.", "This is expected, as our method's performance is inherently limited by the input linguistic annotation quality.", "Another type of errors is caused by the candidate merging procedure (see also below).", "(Footnote 14: Note that our measurements of slot F1 only consider the 'O' tag as negative (the average is computed over slots only).", "This results in lower numbers than those reported in the literature (cf. e.g. Goo et al., 2018), but we believe that this reflects the actual performance more accurately.", "Footnote 15: We present results taken in the unsupervised setting, i.e. when no ontology is available.", "However, since Jin et al. (2018) consider only slot values that are known from the ontology by default, we provide the extended results in Appendix A.2.)", "Due to frequent co-occurrence, it might happen that two semantically unrelated candidates are merged and therefore some tokens are wrongly included as respective slot fillers.", "Nevertheless, the merging step is required in order to obtain a reasonable number of slots for a dialogue domain.", "Our approach does leave some room for improvement, especially regarding the consistency of results across different slots, which can be imbalanced.", "For instance, on the WOZ-hotel data, we observe a difference of up to 0.5 F1 score among individual slots (see Appendix A.2).",
"Slot candidate ranking results are given in Table 2. Our pipeline significantly outperforms Chen et al. (2014)'s approach on 4 out of 5 datasets.", "We can also see that the slot-verb pair clustering step is important: in the ablation experiment where we do not perform clustering ( Ours-nocl ), performance falls dramatically on the WOZ-hotel, WOZ-attr and ATIS data.", "This is because without the clustering step, a large number of context-irrelevant slot candidates is considered, hurting performance.", "In addition, we include a detailed evaluation of the contribution of the individual slot candidate ranking scores described in Section 3.2.2.", "Results in Table 6 suggest that all of our proposed scores improve the performance.", "Slot merging evaluation is shown in Table 3. Although candidates in the CamRest676 data are merged into slots reasonably well, other datasets show a relatively low performance.", "The low RI scores are a result of errors in candidate ranking, which wrongly assigned high ranks to some rare, irrelevant candidates.", "These candidates do not appear in the reference mapping and are assumed to form singular pseudo-slots.", "However, they are typically joined with similar candidates in the merging process.", "This leads to many pairs of candidates that are merged into one slot by our approach but appear separately in the reference mapping.", "Nevertheless, this behavior barely influences slot tagging performance, as the candidates are rare.", "Clustering evaluation: Table 5 suggests that our clustering performs better than simple baselines and can potentially yield useful results if used for intent detection.", "Nevertheless, intent detection is more complex and presumably requires more features and information about the dialogue context, which we reserve for future work.", "The complexity is also suggested by the fact that the naive embedding clustering performs worse than the majority baseline in 4 out of 5 cases.", "Dialogue response generation: We explore the influence that our labels have on sequence-to-sequence dialogue response generation in an experiment on the CamRest676 data (see Table 4).", "We can see that our method provides helpful slot labels that improve dialogue state tracking performance.", "Compared to Jin et al. (2018)'s system used in a fully unsupervised setting, our approach shows significant improvements in all metrics.", "We achieve better results than Jin et al. (2018)'s system especially with respect to entity match rate, suggesting that our model can provide consistent labels throughout the whole dialogue.",
"To make a fair comparison, we further evaluate Jin et al. (2018)'s system in a setting in which it can learn from the labels provided directly by weak supervision (i.e., the frame-semantic parser, not filtered by our pipeline).", "(Figure 4: comparison of the outputs of our tagger and the parser.)", "We observe an improvement in terms of entity match rate, but it does not match the improvement achieved with our filtered labels.", "Surprisingly, slot F1 and joint goal accuracy even decrease slightly, which suggests that label quality is important and the noisy labels obtained directly from weak supervision are not useful enough.", "We present a novel approach for weakly supervised natural language understanding in dialogue systems that discovers domain-relevant slots and tags them in a standalone fashion.", "Our method removes the need for annotated training data by using off-the-shelf linguistic annotation models.", "Experiments on five datasets in four domains mark a significant improvement in intrinsic NLU performance over previous weakly supervised approaches; in particular, we vastly improve the slot recall.", "The usefulness of slots discovered by our method is further confirmed in a full dialogue response generation application.", "Code used for our experiments is available on GitHub.", "16 A drawback of our approach is the reliance on existing linguistic annotation models.", "We show that the method is able to combine multiple annotation sources and create a tagger that functions as a standalone component, generalizing better than the original annotation and thus lowering this dependency.", "Nevertheless, the results are still somewhat limited by the input annotation structure and quality.", "In the future, we plan to further improve the model by unsupervised selection of slot candidates via keyword extraction and clustering, as well as by taking context information from preceding dialogue turns into account.", "We also want to focus more on the intent detection aspect of our work.", "This work was supported by Charles University grants PRIMUS 19/SCI/10, GAUK 302120, and SVV 260 575.", "We also want to thank Jindřich Libovický and David Mareček for helpful comments on the draft, and the anonymous reviewers for their remarks that helped us further improve the paper." ]
[ "abstain", "objective", "result", "method", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "objective", "method", "objective", "objective", "method", "objective", "objective", "result", "result", "objective", "method", "method", "objective", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "other", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "result", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "result", "abstain", "objective", "method", "result", "method", "method", "abstain", "result", "abstain", "result", "method", "other", "other" ]
[ "Document-level event factuality identification is an important subtask in event factuality and is crucial for discourse understanding in Natural Language Processing (NLP).", "Previous studies mainly suffer from the scarcity of suitable corpora and effective methods.", "To solve these two issues, we first construct a corpus annotated with both document- and sentence-level event factuality information on both English and Chinese texts.", "Then we present an LSTM neural network based on adversarial training with both intra- and inter-sequence attentions to identify document-level event factuality.", "Experimental results show that our neural network model can outperform various baselines on the constructed corpus.", "Document-level event factuality identification is the task of deciding the commitment of relevant sources towards the factual nature of an event, i.e., determining whether an event is a fact, a possibility, or an impossible situation from the view of the document.", "Identifying the document-level factuality of events requires comprehensive understanding of documents.", "As illustrated in Figure 1, where events are in bold, the event reach (including its other forms) has various factuality values in different sentences.", "For example, in paragraph 2, reach is impossible/CT- according to the negative word denied , while in paragraph 3, reach is possible/PS+ due to the speculative word may .", "The main content of this document is that Mexico denied that they will reach an agreement with the U.S. on the new trade deal , and the document-level factuality of the event reach is CT-.", "Document-level event factuality identification is fundamental for document-level NLP applications, such as machine reading comprehension, which aims to have machines read a text passage (Figure 1, example document: According to Politico.com, it is said the United States will reach (CT+) an agreement with Mexico on the new trade deal that will replace the North American Free Trade Agreement (NAFTA) before December 2017.", "However, Mexican Economy Minister Ildefonso Guajardo denied that they plan to reach (CT-) any agreement with the U.S. in the trade deal talks.", "We are not going to sacrifice the quality of an agreement because of pressure of time. We will keep engaged, he said.", "Just two days ago, Guajardo said the two sides may reach (PS+) an agreement within hours.", "The government has not been informed that any agreement will be reached (CT-) yet, said another two Mexican officials. During the past few weeks, the U.S. has been negotiating with Mexico on the new trade deal and has achieved much progress.", "Thus, some media speculate that they will possibly reach (PS+) an agreement.", "But now it seems that the negotiations will continue before they can get a good deal.)", "and then answer questions about the text.",
"According to the document in Figure 1, the answer to the following question should be No , which is consistent with the document-level factuality of the event reach (CT-): Q: Does the U.S. reach an agreement with Mexico on the new trade deal before December 2017?", "A: No.", "Previous studies mostly reported on sentence-level event factuality identification tasks.", "On one hand, due to the scarcity of document-level event factuality corpora, these studies only considered corpora annotated with sentence-level event factuality information, such as ACE 2005 1 (footnote 1: https://catalog.ldc.upenn.edu/LDC2006T06), LU (Diab et al., 2009), FactBank (Saurí and Pustejovsky, 2009), and UDS-IH2 (Rudinger et al., 2018).", "On the other hand, previous studies only considered information within sentences, using rules (Saurí, 2008; Saurí and Pustejovsky, 2012), machine learning models (de Marneffe et al., 2012; Werner et al., 2015; Baly et al., 2018), and combinations of them (Qian et al., 2015; Stanovsky et al., 2017) for modeling.", "Neural network models have also recently been used for sentence-level event factuality identification (He et al., 2017; Rudinger et al., 2018; Qian et al., 2018).", "According to Figure 1, document-level event factuality cannot be deduced from each sentence-level factuality separately, but depends on the comprehensive semantic information of the sentences.", "However, no suitable model for the document-level task has been proposed yet.", "To solve the issues above, this paper focuses on document-level event factuality identification.", "Our contributions can be summarized as follows.", "1) We construct a document-level event factuality corpus, i.e. DLEF, on both English and Chinese texts.", "To the best of our knowledge, this is the first document-level event factuality corpus.", "The statistics on the corpora and the experimental results show that our corpus can sufficiently reflect the linguistic characteristics of news texts, and provide an adequate resource for research.", "2) We propose an LSTM neural network with both intra- and inter-sequence attentions to identify document-level event factuality, and consider dependency paths from speculative and negative cues to the event, as well as the sentences containing the event, as features.", "Due to the diversity of contents of the texts in the DLEF corpus, we employ Adversarial Training to improve the robustness of our model.", "Experimental results show that our model is superior to various baselines.", "The corpus and code of this paper will be released at https://github.com/qz011/dlef .", "This section introduces our Document-Level Event Factuality (DLEF) corpus, including the source, detailed guidelines for both document- and sentence-level event factuality, and the main statistics of the corpus.", "News texts contain sufficient speculative and negative information that is significant for event factuality identification, and usually focus on one event with a specific topic.", "(Table 1, event factuality values: combining modality CT/PS/U with polarity +/-/u gives CT+, CT-, CTu; PS+, PS-; and Uu; PSu, U+, and U- are not applicable (NA).)", "Moreover, FactBank (Saurí and Pustejovsky, 2009), the sentence-level event factuality corpus, is also based on news texts.", "Therefore, we choose news texts in both English and Chinese to construct our corpus.", "The English corpus consists of 1727 documents from January 2017 to January 2018, among which 1506 documents are from China Daily 2 (http://www.chinadaily.com.cn), and 221 documents are from Sina Bilingual News 3 (http://roll.edu.sina.com.cn/english/syxw/index.shtml).", "The Chinese corpus consists of 4649 documents from Sina News 4 (http://news.sina.com.cn).", "These news documents cover various topics, e.g., politics, economy, culture, military, and society, which can reflect the heterogeneity of language in news texts.",
"Saurí (2008) employed modality and polarity to describe event factuality values.", "Modality conveys the certainty degree of events, such as certain (CT), probable (PR), and possible (PS), while polarity expresses whether the event happened, including positive (+) and negative (-).", "We use the factuality values in Table 1 according to Saurí (2008).", "Both PR and PS are speculative values and share similar certainty degrees in our corpus, and are merged into PS .", "U/u means underspecified .", "PSu and U+/- are not applicable (NA) and are not considered.", "Although CTu is applicable, neither document-level nor sentence-level events are annotated as CTu in our corpus.", "We adopt the definition of events proposed by TimeML (Pustejovsky et al., 2003) and consider the events that can be critical for computing the factuality.", "To ensure that the task is meaningful, we focus on the events that have various types of sentence-level factuality values.", "If there is more than one suitable event in a document, we annotate them separately.", "First, the annotation of document-level event factuality is based on the definition, i.e., determining the factuality of an event from the view of the document requires understanding the semantics of the document, including various sentence-level event factuality values.", "Second, sentence-level event factuality is essential for the document-level task, which makes sense when document- and sentence-level factuality of events have different values.", "Therefore, we annotate the sentence-level event factuality as follows: CT- events are negated by negative cues.", "For example, the events enter and merger are governed by the negative cues impossible and denied in sentences S1 and S2, respectively.", "(S2)", "Sinopec responded to National Business Daily, and denied the rumors of a merger with PetroChina.",
"PS+ events (e.g., improve and fallen ) are governed by speculative cues (e.g., may ), as illustrated in sentences S4 and S5.", "(S4)", "We think that further investigation may help to improve the treatment of people with similar infections.", "PS- events are governed by both speculative and negative cues.", "Different from CT-, PS- means incomplete negation.", "For example, the PS- event noticed is governed by the speculative cue probably and the negative cue not in sentence S6, and fall is modified by the cues may and not in sentence S7.", "(S6)", "The bus driver had probably not noticed the truck early enough.", "(S7)", "Oil prices may not fall sharply due to the strong global demand.", "Uu events can appear in questions (e.g., considering in sentence S8) and in intensional contexts with underspecified semantics (e.g., raises in sentence S9): (S8) Is France considering leaving the EU?", "(S9)", "The US dollar's decline cannot be reversed even if the Federal Reserve raises rates three times.",
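Read literally, the annotation guidelines above suggest a simple cue-based mapping: negative cues only give CT-, speculative cues only give PS+, both together give PS-, and neither gives CT+. The toy sketch below illustrates that mapping only; it ignores cue scope and questions (Uu), and its cue lists are illustrative stand-ins, not the BioScope or CNeUn lexicons and not the authors' annotation procedure.

```python
# Toy illustration of the cue-based factuality guidelines; cue lists and
# scope handling are deliberately simplified.
SPECULATIVE = {"may", "might", "possibly", "probably"}
NEGATIVE = {"not", "no", "denied", "impossible"}

def sentence_factuality(tokens):
    spec = any(t.lower() in SPECULATIVE for t in tokens)
    neg = any(t.lower() in NEGATIVE for t in tokens)
    if spec and neg:
        return "PS-"   # incomplete negation
    if neg:
        return "CT-"
    if spec:
        return "PS+"
    return "CT+"

print(sentence_factuality("Oil prices may not fall sharply".split()))  # PS-
```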
"The task is trivial if most documents have only one type of sentence-level factuality value, and in this", "case, document-level factuality probably shares the same value.", "To understand the usefulness of document-level event factuality identification and the DLEF corpus, we computed statistics of documents with n different types of sentence-level event factuality values, shown in Table 2.", "From the table we can find that for the English corpus there are 41.94% CT- and 66.06% PS+ documents with different sentence-level event factuality values, but such CT+ documents only cover 11.13%.", "For the Chinese corpus, such CT- and PS+ documents cover 63.41% and 62.15%, but such CT+ documents only make up 14.23%.", "Table 2 indicates that sentence-level factuality usually agrees with document-level factuality in CT+ documents, making them straightforward to identify.", "However, in those non-CT+ documents with non-factual document-level values, sentence-level factuality is likely to have different values from the documents, making them more difficult to identify.", "In general, the English and Chinese corpora have 25.64% and 37.84% documents with different sentence-level event factuality values, indicating that this corpus is suitable for document-level event factuality identification.", "Table 3 shows the statistics of the DLEF corpus.", "CT+ document-level events are in the majority, because information reported by news texts is usually real.", "Kappa (Cohen, 1960) is employed to measure the inter-annotator agreement of annotating document- and sentence-level event factuality between the two independent annotators who annotate the entire corpus, just as shown in Table 4.", "(Table 3, DLEF corpus statistics, English part. Documents: CT- 279/16.16%, PS+ 274/15.87%, PS- 12/0.69%, Uu 12/0.69%, CT+ 1150/66.59%, total 1727. Sentence-level events: CT- 662/11.52%, PS+ 574/9.99%, PS- 37/0.64%, Uu 71/1.24%, CT+ 4401/76.61%, total 5745.) These two annotators are postgraduate", "students who major in NLP.", "In addition, the Kappa of events on the English and Chinese corpora are 0.83 and 0.85, respectively.", "All the Kappa values are larger than 0.75, proving the effectiveness and meaningfulness of our DLEF corpus.", "This section describes the LSTM neural network for document-level event factuality identification in detail.", "As shown in Figure 2, to extract feature representations of events from the view of documents, we consider both intra- and inter-sequence attention for dependency paths and sentences.", "In addition, due to the diversity of contents of", "documents in the DLEF corpus, we consider adversarial training to ensure the robustness of our model.", "(Figure 2: neural network architecture for document-level event factuality identification: an input layer and embedding layer feed two intra-sequence attention LSTMs, one over dependency paths P_0 . . . P_{i-1} and one over sentences S_0 . . . S_{j-1}; an inter-sequence attention layer produces h_sp and h_ss, which are combined into h_e and fed to a softmax layer, softmax(W_1 h_e + b_1).)", "For our task, we use the specified events that have been annotated, and utilize the Chinese cues in the CNeUn corpus (Zou et al., 2015) and the English cues in the BioScope corpus (Vincze et al., 2008), which also considers multi-word cues, e.g., rule out .", "We do not use any annotated sentence-level event factuality.", "For one event, we consider all the sentences containing it, and mainly employ the following two features in our model:", "1) Syntactic Features : Previous studies (Saurí and Pustejovsky, 2012; de Marneffe et al., 2012) have proved the effectiveness of dependency trees on event factuality identification tasks.", "Hence, we employ the dependency paths from speculative or negative cues to the event as syntactic features.", "In addition, we also consider the above features in the contexts of each sentence containing the event as the input, and set the window size to 3, i.e., one sentence before and after the current one.", "If adjacent sentences contain speculative or negative cues, the dependency path is the concatenation of the path from the cue to the root and the path from the root to the event (Quirk and Poon, 2017).", "A dependency path or sentence can be represented as X_0 according to the embedding table.", "We employ an LSTM with n_h hidden units to model the sequences from both directions, producing the forward hidden sequence H→, the backward hidden sequence H←, and the output sequence H = H→ + H←.", "We adopt the attention mechanism to capture the most important information from H and obtain the output h: H_m = tanh(H) (1), α = softmax(v^T H_m) (2), h = tanh(H α^T) (3), where v ∈ R^{n_h} is a parameter.", "One event can have k sequences X_0, X_1, . . . , X_{k-1}, whose representation is H_s = [h_0, h_1, . . . , h_{k-1}] according to the above equations, where H_s ∈ R^{k × n_h}.", "To extract the feature representation h_s ∈ R^{n_h} from the k sequences, we utilize an inter-sequence attention mechanism that is computed as: H_ms = tanh(H_s) (4), α_s = softmax(v_s^T H_ms) (5), h_s = tanh(H_s α_s^T) (6), where v_s ∈ R^{n_h} is a parameter.",
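A NumPy sketch of the attention in Eqs. (1)-(6), assuming H is laid out as (timesteps × n_h) and that the inter-sequence step simply reuses the same attention over the stacked per-sequence outputs; v and v_s would be learned parameters, so random vectors stand in for them here.

```python
# Sketch of intra- and inter-sequence attention, Eqs. (1)-(6).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(H, v):
    """H: (T, n_h). Eq. (1): H_m = tanh(H); Eq. (2): alpha = softmax(v^T H_m);
    Eq. (3): h = tanh(H weighted by alpha), returning an (n_h,) vector."""
    H_m = np.tanh(H)
    alpha = softmax(H_m @ v)
    return np.tanh(H.T @ alpha)

rng = np.random.default_rng(0)
n_h, T, k = 50, 12, 4
v, v_s = rng.normal(size=n_h), rng.normal(size=n_h)

h = attend(rng.normal(size=(T, n_h)), v)                       # intra-sequence
H_s = np.stack([attend(rng.normal(size=(T, n_h)), v) for _ in range(k)])
h_s = attend(H_s, v_s)                                         # inter-sequence
```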
"Suppose that an event has i dependency paths P_0, P_1, . . . , P_{i-1}, and appears in j sentences S_0, S_1, . . . , S_{j-1}.", "Considering that dependency paths and sentences contain syntactic and semantic information, respectively, we employ two LSTM neural networks as defined above to learn vector representations h_sp and h_ss of the dependency paths and sentences, and concatenate them into the feature representation of the event h_e: h_e = h_sp ⊕ h_ss (7), where ⊕ is the concatenation operator.", "o = softmax(W_1 h_e + b_1) (8)", "where W_1 ∈ R^{c × dim(h_e)} and b_1 ∈ R^c are parameters, and c = 5 is the number of categories of factuality values (CT+, CT-, PS+, PS-, Uu).", "The objective function of the proposed neural network is designed as: L_D(θ) = -(1/m) Σ_{i=0}^{m-1} log p(y^(i) | x^(i), θ) (9), where y^(i) is the gold label of the instance x^(i), p(y^(i) | x^(i), θ) is its predicted probability, m is the number of instances, and θ is the parameter set to learn.", "This model with TWO attention layers is denoted as Att 2 in the next section.", "As described in Section 2, documents in the DLEF corpus cover various topics.", "To improve the robustness of our model, we consider Adversarial Training.", "Similar to previous work (Miyato et al., 2016; Wu et al., 2017), we add a small adversarial perturbation e_adv to the word embeddings, and employ the following objective function: L_adv(X | θ) = L(X + e_adv | θ̂) (10), e_adv = argmax_{||e|| ≤ ε} L(X + e | θ̂) (11), where θ̂ is a fixed copy of the current θ and X is the input.", "Due to the intractable nature of the computation in Eq.", "(11), Goodfellow et al. (2014) proposed Eq.", "(12) to linearize L(X | θ) near X to approximate Eq.", "(11): e_adv = ε g / ||g|| (12), g = ∇_T L(X | θ̂) (13), where T is the embedding table.",
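Eqs. (12)-(13) amount to a fast-gradient perturbation of the embedded input. Below is a framework-agnostic sketch with the gradient computation abstracted into a user-supplied `grad_fn` (in practice this would come from autograd in a deep learning framework); the epsilon value is illustrative, not the paper's setting.

```python
# Sketch of the adversarial perturbation, Eqs. (12)-(13):
# e_adv = epsilon * g / ||g||, with g the gradient of the loss
# w.r.t. the embedded input.
import numpy as np

def adversarial_perturbation(grad_fn, X, epsilon=0.05):
    """grad_fn(X) returns dL/dX for embedded input X (same shape as X)."""
    g = grad_fn(X)
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return np.zeros_like(X)
    return epsilon * g / norm

# Training would then also minimize the loss on X + e_adv, cf. Eq. (10).
```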
"We introduce the experimental settings and the baselines, finally presenting the experimental results and analysis in detail.", "The PS- and Uu documents only cover 1.39% and 1.20% of our English and Chinese corpora, respectively.", "Therefore, we mainly focus on the performance of CT+, CT-, and PS+.", "For a fair comparison, we perform 10-fold cross-validation on the English and Chinese corpora, respectively.", "In addition to Precision, Recall, and F1-measure for each category of factuality value, we consider macro- and micro-averaging to obtain the overall performance over all the categories of factuality values.", "The number of hidden units of the LSTM is set to n_h = 50.", "We initialize word embeddings via Word2Vec (Mikolov et al., 2013), setting the dimension to d_0 = 100, and fine-tune them during training.", "SGD with momentum is applied to optimize our models.", "Att 2 and Att 2+AT are the models proposed in Section 3 that consider the contexts, i.e., one sentence before and after the current sentence containing the event, as the input.", "Compared to Att 2, Att 2+AT considers Adversarial Training (AT, the same below).", "We also consider the following baselines for comparison with our models: MaxEntVote is a maximum entropy model that only considers the view of AUTHOR (de Marneffe et al., 2012).", "We use a maximum entropy model to identify sentence-level event factuality, and consider a voting mechanism, i.e., we choose the value committed by the most sentences as the document-level factuality value.", "We also considered other machine learning models, e.g. Lee et al. (2015), but obtained lower micro-/macro-averaged F1 on the English (59.38/33.36) and Chinese (53.91/43.20) corpora.", "SentVote identifies sentence-level event factuality, and does not consider the inter-sequence attention in the model proposed in Section 3.", "Similar to the MaxEntVote model, a voting mechanism is used to identify document-level event factuality in this SentVote model.", "MP 2 considers Max-Pooling instead of attention, compared with Att 2.", "Att 1 considers only the intra-sequence attention, but not the inter-sequence attention.", "For an event, we concatenate its i dependency paths and j sentences into one path and one sentence as the input, respectively.", "Table 5 presents the performance of our models and the baselines.", "MaxEntVote gives relatively lower results than other models, especially on CT- and PS+.", "SentVote models are better than MaxEntVote, but still obtain lower results than Att 2, which can prove that inter-sequence attention is more useful than voting.", "Max-pooling only selects the most active information for each dimension of features, while attention takes into account all the features and assigns weights to them according to their degrees of importance.", "Hence, Att 2 gets better results than MP 2.", "Att 1 only considers the intra-sequence attention and obtains lower results than Att 2, which proves the effectiveness of inter-sequence attention.", "Att 2 and Att 2+AT achieve better results than the other baselines.", "Compared to Att 2, Att 2+AT considers the adversarial perturbation and training that can alleviate over-fitting.", "Therefore, Att 2+AT is superior to Att 2, which can prove the effectiveness of adversarial training.", "On both the English and Chinese corpora, the performance on CT+ is better than that on PS+ and CT-.", "On one hand, it is easier to identify CT+ documents due to their majority.", "On the other hand, most news texts hardly contain bogus and false contents.", "Therefore, in most CT+ documents, sentence-level factuality values are consistent with the document-level value, just as in S10.", "However, in PS+ and CT- documents with non-CT+ document-level values, sentence-level factuality values have different viewpoints from the corresponding document, varying among CT-, PS+, and CT+, making the task more difficult, e.g., S11.", "(S10)", "India successfully tested (CT+) a supersonic missile, capable of destroying an incoming ballistic missile at low altitude.", "...... The test (CT+) was carried out from a test range in Odisha , official sources said.", "(S11)", "Argentine navy said it had not contacted (CT-) the San Juan submarine.",
"......", "Some media previously said the navy may have received signals from the submarine and contacted (PS+) it.", "For Att 2+AT, we also investigate the effects of the contexts of the sentences containing events, used as input, on the performance.", "The results are given in Table 6, which shows that contexts can improve the performance more significantly on the Chinese corpus than on the English corpus.", "We find that in the Chinese corpus these sentences are commonly in the same paragraph and have a strong semantic coherence.", "Therefore, information in adjacent sentences can contribute to the identification of the document-level factuality of the events in the current sentences.", "Sentences S12 and S13 are adjacent sentences in one paragraph.", "The document-level factuality value of the event provided in S12 is CT-.", "However, the sentence-level value of provided is PS+.", "If we consider S13, the negative cue denied can lead to the correct document-level factuality value of provided .", "In the English corpus, by contrast, similar sentences are much fewer, because paragraphs in most English news texts only contain one or two sentences, and sentences in different paragraphs share less semantic correlation", "than those in the same paragraph. (Table 5, F1-measures of baselines and our models; columns: CT-, PS+, CT+, Micro-A, Macro-A. English: MaxEntVote 58.17, 35.89, 75.14, 68.42, 56.40; SentVote 70.22, 57.85, 83.98, 78.06, 70.68; MP 2 70.57, 56.39, 83.72, 77.65, 70.23; Att 1 65.25, 53.65, 79.18, 73.23, 66.03; Att 2 73.88, 59.29, 88.59, 81.84, 73.92; Att 2+AT 76.87, 62.14, 89.84, 83.56, 76.28. Chinese: MaxEntVote 62.44, 58.29, 72.22, 67.72, 64.32; SentVote 72.66, 58.39, 80.68, 74.70, 70.58; MP 2 74.34, 65.17, 78.91, 75.22, 72.81; Att 1 68.82, 49.78, 81.89, 71.12, 67.28; Att 2 81.41, 73.35, 86.58, 82.79, 80.45; Att 2+AT 83.35, 74.06, 87.52, 84.03, 81.64.)", "Hence, the performance improvement is smaller when considering adjacent sentences in the English corpus.", "(S12)", "( It is doubted that the Mexican government provided vantage points for the enterprises involved during the bidding process. ) (S13)", "( The Mexican Foreign Ministry responded and denied the rumor on the 7th. )",
"If we consider more adjacent sentences, e.g., two sentences before and after the current sentence, however, the results will be a bit lower.", "The micro-/macro-averaged F1 on the English and Chinese corpora are 81.20/75.65 and 82.57/80.91, respectively.", "We think the reason is that some sentences are far away from the current sentence and have little effect on the current event, and considering more contexts may also lead to overfitting.", "Moreover, we explore the effects of considering only the dependency path (Dpath) and only the sentence (Sent) in Table 6.", "Att 2+AT achieves the best results when considering both paths and sentences as input, proving that both of them are effective features for our model.", "Att 2+AT obtains higher performance with only sentences than with only paths as input, meaning that Att 2+AT mainly benefits from sentences that can offer semantic information.", "Error analysis shows that documents with incorrectly identified values contain sentences with more speculative or negative cues: (S14) When asked if it might be arson , authorities said that no fire raiser has been found now, but the possibility of artificial arson should not be ruled out.", "S14 contains the speculative cues if , might , possibility and the negative cues no , not , ruled out .", "It is difficult to identify whether the events are governed by the cues when only considering the dependency paths and ignoring the semantic information offered by sentences.", "S14 can demonstrate the importance of semantic features.", "As mentioned in Section 2.4, the document-level task becomes trivial if most documents have only one category of sentence-level factuality value that", "is the same as the document-level value.", "Table 7 shows the performance of Att 2+AT on the documents with n different types of sentence-level factuality values.", "The micro- and macro-averaged F1 for n ≥ 2 are lower than those for n = 1, indicating that the factuality of documents that have different types of sentence-level factuality is more difficult to identify due to the interference from sentence-level values.", "We notice that in the Chinese corpus, the performance on CT- is much higher than that on PS+ and CT+ when n ≥ 2.", "According to the analysis on the Chinese corpus, we find that most CT- documents are usually used to deny rumors, i.e., those sentence-level events whose factuality values are not CT-.", "Therefore, the sentence-level CT- events are often in the topic sentences of the documents and dominate among sentences, which can contribute to the better results on document-level CT- events in the Chinese corpus.", "Because document-level event factuality is related to sentence-level factuality information, we also consider a joint optimization model for them.", "For the sentence-level task, we use the LSTM neural network in Section 3 and only consider the current sentence, i.e., we do not consider information in adjacent sentences or the inter-sequence attention layer.", "The objectives of the document- and sentence-level tasks are denoted as L_D(θ) and L_S(θ), and the objective of our joint optimization model is: L_J(θ) = λ L_D(θ) + (1 - λ) L_S(θ) (14), where λ = 0.6 is the trade-off.",
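Eq. (14) is a plain convex combination of the two losses; as a one-line sketch:

```python
# Sketch of the joint objective, Eq. (14): lambda weights the
# document-level loss against the sentence-level loss.
def joint_loss(loss_doc, loss_sent, lam=0.6):
    return lam * loss_doc + (1.0 - lam) * loss_sent
```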
"The performance of both sentence-level and document-level event factuality identification is shown in Table 8.", "The micro-/macro-averaged F1 of the joint optimization model on the English and Chinese corpora are 82.89/75.64 and 83.83/81.48, respectively.", "Although document-level event factuality is based on the factuality information in sentences, the sentence-level factuality value of an event only depends on the current sentence, and is likely to have a different value compared to the current document-level factuality.", "Therefore, the joint model cannot improve the performance of the document-level task.", "Researchers have studied document-level tasks in many NLP applications, e.g., sentiment analysis (Xu et al., 2016; Dou, 2017), named entity recognition (Luo et al., 2018), and machine translation (Born et al., 2017).", "But related studies on event factuality are limited to the sentence-level task.", "Diab et al. (2009) and Prabhakaran et al. (2010) presented studies of belief annotation and tagging, and classified predicate events into Committed Belief (CB), Non-CB or Not Applicable using a supervised framework.", "For factuality assessment, Lee et al. (2015) employed dependency features, while Stanovsky et al. (2017) considered deep linguistic information, such as modality classes and syntactic re-ordering with the PropS tree annotation structure (Lotan et al., 2013).", "Baly et al. (2018) considered a set of features and predicted the factuality of reporting and bias of news media.", "Saurí (2008) and Saurí and Pustejovsky (2012) proposed a rule-based model to identify event factuality on FactBank.", "de Marneffe et al. (2012) used a machine learning model and Qian et al. (2015) utilized a two-step framework combining machine learning and rule-based approaches on FactBank.", "In addition to FactBank, Prabhakaran et al. (2015) proposed an ongoing framework for a larger corpus based on LU, and Cao et al. (2013) constructed a Chinese corpus annotated with event factuality based on ACE 2005.", "However, no previous work annotated a document-level corpus.", "We construct the DLEF corpus with document-level event factuality for the first time.", "Some studies focused on the document-level event identification task.", "Choubey et al. (2018) designed a rule-based classifier to identify central events according to event coreference relations.",
(2018) utilized a kernel-based neural model that captured semantic relations between discourse units for event salience identification.", "However, they did not consider the document-level event factuality.", "To our best knowledge, this paper is the first work on document-level event factuality identification task.", "Previous studies (He et al., 2017; Rudinger et al., 2018; Qian et al., 2018) have tried neural network models on sentence-level factuality identification.", "Recent research has shown that neural networks with multi-level attention can extract meaningful information from heterogeneous input and improve the performance of NLP tasks, e.g., discourse relation (Liu and Li, 2016), relation classification (Wang et al., 2016), and question answering (Yu et al., 2017).", "Moreover, to improve the robustness of neural networks, related studies considered adversarial perturbation and training on text classification (Miyato et al., 2016) and relation extraction (Wu et al., 2017).", "This paper is in line in proposing an adversarial neural network with both intraand inter-sequence attention.", "We investigated document-level event factuality identification task by constructing a corpus annotated with documentand sentence-level event factuality based on both English and Chinese texts.", "To identify document-level event factuality, we proposed an LSTM neural network with both intraand inter-sequence attention, and consider adversarial training to improve the robustness.", "Experimental results showed that document-level event identification on our DLEF corpus is useful, and our adversarial training model outperforms several baselines.", "To our knowledge, this is the first paper for the document-level event factuality identification.", "In the future work, we will consider to detect events and their sentence-level and document-level factuality with a joint framework, and we will also continue to expand the scale of our DLEF corpus.", "The authors would like to thank the three anonymous reviewers for their comments on this paper.", "This work was partially supported by national Natural Science Foundation of China (NSFC) via Grant Nos. 61836007, 6177235, 61773276, 61673290." ]
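The joint objective in Eq. 14 above is a weighted sum of the document-level and sentence-level cross-entropy losses. A minimal PyTorch-style sketch of that combination follows; the tensor names are hypothetical, and only the trade-off λ = 0.6 comes from the excerpt.

```python
import torch.nn.functional as F

LAMBDA = 0.6  # trade-off between the two tasks, per Eq. 14

def joint_loss(doc_logits, doc_labels, sent_logits, sent_labels):
    """L_J(theta) = lambda * L_D(theta) + (1 - lambda) * L_S(theta)."""
    l_d = F.cross_entropy(doc_logits, doc_labels)    # document-level factuality loss
    l_s = F.cross_entropy(sent_logits, sent_labels)  # sentence-level factuality loss
    return LAMBDA * l_d + (1.0 - LAMBDA) * l_s
```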
[ "abstain", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "method", "objective", "result", "objective", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "objective", "objective", "result", "objective", "method", "other", "other" ]
[ "Recently, many methods discover effective evidence from reliable sources by appropriate neural networks for explainable claim verification, which has been widely recognized.", "However, in these methods, the discovery process of evidence is nontransparent and unexplained.", "Simultaneously, the discovered evidence only roughly aims at the interpretability of the whole sequence of claims but insufficient to focus on the false parts of claims.", "In this paper, we propose a Decision Tree-based Co-Attention model (DTCA) to discover evidence for explainable claim verification.", "Specifi-cally, we first construct Decision Tree-based Evidence model (DTE) to select comments with high credibility as evidence in a transparent and interpretable way.", "Then we design Co-attention Self-attention networks (CaSa) to make the selected evidence interact with claims, which is for 1) training DTE to determine the optimal decision thresholds and obtain more powerful evidence; and 2) utilizing the evidence to find the false parts in the claim.", "Experiments on two public datasets, RumourEval and PHEME, demonstrate that DTCA not only provides explanations for the results of claim verification but also achieves the state-of-the-art performance, boosting the F1-score by 3.11%, 2.41%, respectively.", "The increasing popularity of social media has brought unprecedented challenges to the ecology of information dissemination, causing rampancy of a large volume of false or unverified claims, like extreme news, hoaxes, rumors, fake news, etc.", "Research indicates that during the US presidential election (2016), fake news accounts for nearly 6% of all news consumption, where 1% of users are exposed to 80% of fake news, and 0.1% of users are responsible for sharing 80% of fake news (Grinberg et al., 2019), and democratic elections are vulnerable to manipulation of the false or unverified claims on social media (Aral and Eckles, 2019), which renders the automatic verification of claims a crucial problem.", "Currently, the methods for automatic claim verification could be divided into two categories: the first is that the methods relying on deep neural networks learn credibility indicators from claim content and auxiliary relevant articles or comments (i.e., responses) (Volkova et al., 2017; Rashkin et al., 2017; Dungs et al., 2018).", "Despite their effectiveness, these methods are difficult to explain why claims are true or false in practice.", "To overcome the weakness, a trend in recent studies (the second category) is to endeavor to explore evidence-based verification solutions, which focuses on capturing the fragments of evidence obtained from reliable sources by appropriate neural networks (Popat et al., 2018; Hanselowski et al., 2018; Ma et al., 2019; Nie et al., 2019).", "For instance, Thorne et al. (2018) build multi-task learning to extract evidence from Wikipedia and synthesize information from multiple documents to verify claims.", "Popat et al. (2018) capture signals from external evidence articles and model joint interactions between various factors, like the context of a claim and trustworthiness of sources of related articles, for assessment of claims.", "Ma et al. 
(2019) propose hierarchical attention networks to learn sentence-level evidence from claims and their related articles based on coherence modeling and natural language inference for claim verification.", "Although these methods provide evidence that addresses the explainability of claim verification to some extent, there are still several limitations.", "First, it is generally hard to interpret their discovery process of evidence for claims; namely, the methods themselves lack interpretability because they are all based on neural networks, belonging to nontransparent black-box models.", "Second, the provided evidence only offers a coarse-grained explanation of claims.", "They are all aimed at the interpretability of the whole sequence of claims but are insufficient to focus on the false parts of claims.", "To address the above problems, we design Decision Tree-based Co-Attention networks (DTCA) to discover evidence for explainable claim verification, which contains two stages: 1) a Decision Tree-based Evidence model (DTE) for discovering evidence in a transparent and interpretable way; and 2) Co-attention Self-attention networks (CaSa) using the evidence to explore the false parts of claims.", "Specifically, DTE is constructed on the basis of structured and hierarchical comments (aiming at the claim), considers many factors as decision conditions from the perspective of the content and meta data of comments, and selects high-credibility comments as evidence.", "CaSa exploits the selected evidence to interact with claims at the deep semantic level, which serves two roles: one is to train DTE to pursue the optimal decision thresholds and finally obtain more powerful evidence, and the other is to utilize the evidence to find the false parts in claims.", "Experimental results reveal that DTCA not only achieves state-of-the-art performance but also provides interpretability for the results of claim verification and for the selection process of evidence.", "Our contributions are summarized as follows: We propose a transparent and interpretable scheme that incorporates a decision tree model into co-attention networks, which not only discovers evidence for explainable claim verification (Section 4.4.3) but also provides an interpretation of the discovery process of evidence through the decision conditions (Section 4.4.2).", "The designed co-attention networks promote deep semantic interaction between evidence and claims, which can train DTE to obtain more powerful evidence and effectively focus on the false parts of claims (Section 4.4.3).", "Experiments on two public, widely used fake news datasets demonstrate that our DTCA achieves better performance than previous state-of-the-art methods (Section 4.3.2).", "Claim Verification Many studies on claim verification generally extract an appreciable quantity", "of credibility-indicative features around semantics (Ma et al., 2018b; Khattar et al., 2019; Wu et al., 2020), emotions (Ajao et al., 2019), stances (Ma et al., 2018a; Kochkina et al., 2018; Wu et al., 2019), writing styles (Potthast et al., 2018; Grondahl and Asokan, 2019), and source credibility (Popat et al., 2018; Baly et al., 2018a) from claims and relevant articles (or comments).", "For a concrete instance, Wu et al. 
(2019) devise sifted multi-task learning networks to jointly train stance detection and fake news detection tasks for effectively utilizing common features of the two tasks to improve the task performance.", "Despite reliable performance, these methods for claim verification are unexplainable.", "To address this issue, recent research concentrates on the discovery of evidence for explainable claim verification, which mainly designs different deep models to exploit semantic matching (Nie et al., 2019; Zhou et al., 2019), semantic conflicts (Baly et al., 2018b; Dvorak and Woltran, 2019; Wu and Rao, 2020), and semantic entailments (Hanselowski et al., 2018; Ma et al., 2019) between claims and relevant articles.", "For instance, Nie et al. (2019) develop neural semantic matching networks that encode, align, and match the semantics of two text sequences to capture evidence for verifying claims.", "Combined with the pros of recent studies, we exert to perceive explainable evidence through semantic interaction for claim verification.", "Explainable Machine Learning Our work is also related to explainable machine learning, which can be generally divided into two categories: intrinsic explainability and post-hoc explainability (Du et al., 2018).", "Intrinsic explainability (Shu et al., 2019; He et al., 2015; Zhang and Chen, 2018) is achieved by constructing self-explanatory models that incorporate explainability directly into their structures, which requires to build fully interpretable models for clearly expressing the explainable process.", "However, the current deep learning models belong to black box models, which are difficult to achieve intrinsic explainability (Gunning, 2017).", "Post-hoc explainability (Samek et al., 2017; Wang et al., 2018; Chen et al., 2018) needs to design a second model to provide explanations for an existing model.", "For example, Wang et al. 
(2018) combine the strengths of the embeddings-based model and the tree-based model to develop explainable recommendation, where the tree-based model obtains evidence and the embeddings-based model [Figure 1: The architecture of DTCA, comprising the Decision Tree-based Evidence model (DTE) and the Co-attention Self-attention networks (CaSa), the latter with a sequence representation layer, a co-attention layer (self-attention, feed-forward, and pooling blocks), and an output layer.]", "improves the performance of recommendation.", "In this paper, following the post-hoc explainability, we harness a decision-tree model to explain the discovery process of evidence and design co-attention networks to boost the task performance.", "In this section, we introduce the decision tree-based co-attention networks (DTCA) for explainable claim verification, with the architecture shown in Figure 1, which involves two stages: a decision tree-based evidence model (DTE) and co-attention self-attention networks (CaSa) that consist of a 3-level hierarchical structure, i.e., a sequence representation layer, a co-attention layer, and an output layer.", "Next, we describe each part of DTCA in detail.", "DTE is based on the tree of comments (including replies) aimed at one claim.", "We first build a tree network based on hierarchical comments, as shown in the left of Figure 2.", "The root node is one claim, and the second-level nodes and below are users' comments on the claim ( R 11 , ... 
, R kn ), where k and n denote the depth of the tree comments and the width of the last level, respectively.", "We try to select comments with high credibility as evidence for the claim, so we need to evaluate the credibility of each node (comment) in the network and decide whether to select the comment or not.", "Three factors are considered from the perspective of the content and meta data of comments. [Figure 2: Overview of DTE, showing a decision tree whose conditions x i are compared against thresholds a i and applied to the tree of comments.]", "The semantic similarity between comments and claims.", "It measures the relevancy between comments and claims and aims to filter irrelevant and noisy comments.", "Specifically, we adopt the soft cosine measure (Sidorov et al., 2014) between average word embeddings of both claims and comments as the semantic similarity.", "The credibility of reviewers (i.e., the people who post comments).", "It follows that reviewers with high credibility also usually have high reliability in their comments (Shan, 2016).", "Specifically, we utilize multiple meta-data features of reviewers to evaluate reviewer credibility, i.e., whether the following elements exist or not: verified, geo, screen name, and profile image; and the number of the items: followers, friends, and favorites.", "The examples are shown in Appendix A. The credibility of comments.", "It is based on the meta data of comments to roughly measure the credibility of comments (Shu et al., 2017), i.e., 1) whether the following elements exist or not: geo, source, favorite the comment; and 2) the number of favorites and the content length.", "The examples are shown in Appendix A. In order to integrate these factors in a transparent and interpretable way, we build a decision tree model which takes the factors as decision conditions to measure the node credibility of tree comments, as shown in the grey part of Figure 2.", "We represent the structure of a decision tree model as $Q = \{V, E\}$, where $V$ and $E$ denote nodes and edges, respectively.", "Nodes in $V$ have two types: decision (a.k.a. 
internal) nodes and leaf nodes.", "Each decision node splits on a decision condition $x_i$ (one of the three factors) with two decision edges (decision results) based on a specific decision threshold $a_i$.", "The leaf node gives the decision result (the red circle), i.e., whether the comment is selected or not.", "In our experiments, if any decision node is yes, the evaluated comment in the tree comment network will be selected as a piece of evidence.", "In this way, whether each comment is selected as evidence is transparent and interpretable, i.e., interpreted by the decision conditions.", "When the comment nodes in the tree network have been evaluated by the decision tree model, we leverage a post-pruning algorithm to select comment subtrees as the evidence set for CaSa (Section 3.2) training.", "In DTE, the decision threshold $a_i$ is uncertain; that is to say, according to different decision thresholds, there are different numbers of comments as evidence for CaSa training.", "In order to train the decision thresholds in DTE so as to obtain more powerful evidence, and then exploit this evidence to explore the false parts of fake news, we devise CaSa to promote the interaction between evidence and claims.", "The details of DTCA are as follows: 3.2.1 Sequence Representation Layer The inputs of CaSa include a sequence of evidence (the evidence set obtained by the DTE model is concatenated into a sequence of evidence) and a sequence of the claim.", "Given a sequence of $l$ tokens $X = \{x_1, x_2, \ldots, x_l\}$, $X \in \mathbb{R}^{l \times d}$, which could be either a claim or the evidence, each token $x_i \in \mathbb{R}^d$ is a $d$-dimensional vector obtained by the pre-trained BERT model (Devlin et al., 2019).", "We encode each token into a fixed-size hidden vector $h_i$ and then obtain the sequence representations for a claim $X_c$ and evidence $X_e$ via two BiLSTM (Graves et al., 2005) neural networks, respectively.", "$\overrightarrow{h}_i = \overrightarrow{\mathrm{LSTM}}(\overrightarrow{h}_{i-1}, x_i)$ (1) $\overleftarrow{h}_i = \overleftarrow{\mathrm{LSTM}}(\overleftarrow{h}_{i+1}, x_i)$ (2) $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$ (3) where $\overrightarrow{h}_i \in \mathbb{R}^h$ and $\overleftarrow{h}_i \in \mathbb{R}^h$ are the hidden states of the forward LSTM $\overrightarrow{\mathrm{LSTM}}$ and the backward LSTM $\overleftarrow{\mathrm{LSTM}}$.", "$h$ is the number of hidden units of the LSTM.", "$[;]$ denotes the concatenation operation.", "Finally, $R_e \in \mathbb{R}^{l \times 2h}$ and $R_c \in \mathbb{R}^{l \times 2h}$ are the sequence representations of the evidence and the claim, respectively.", "Additionally, experiments confirm that the BiLSTM in CaSa can be replaced by a BiGRU (Cho et al., 2014) with comparable performance.", "Co-attention networks are composed of two hierarchical self-attention networks.", "In our paper, the sequence of evidence first leverages one self-attention network to conduct deep semantic interaction with the claim for capturing the false parts of the claim.", "Then the semantics of the interacted claim focus on the semantics of the sequence of evidence via another self-attention network for concentrating on the key parts of the evidence.", "The two self-attention networks are both based on the multi-head attention mechanism (Vaswani et al., 2017).", "Given a matrix of $l$ query vectors $Q \in \mathbb{R}^{l \times 2h}$, keys $K \in \mathbb{R}^{l \times 2h}$, and values $V \in \mathbb{R}^{l \times 2h}$, the scaled dot-product attention, the core of the self-attention networks, is described as $\mathrm{Attention}(Q, K, V) = \mathrm{softmax}(\frac{QK^T}{\sqrt{d}})V$ (4) Particularly, to enable the claim and evidence to interact more directly and effectively, in the first self-attention network, $Q = R_e^{pool}$ ($R_e^{pool} \in \mathbb{R}^{2h}$) is the max-pooled vector of the sequence representation of the evidence, and $K = V = R_c$, where $R_c$ is the sequence representation of the claim.", "In the second self-attention network, $Q = C$, i.e., the output vector of the self-attention network for 
claim (the details are in Eq. 7), and $K = V = R_e$, where $R_e$ is the sequence representation of the evidence.", "To obtain high parallelizability of attention, multi-head attention first linearly projects the queries, keys, and values $j$ times with different linear projections, and then the $j$ projections perform the scaled dot-product attention in parallel.", "Finally, these attention results are concatenated and once again projected to get the new representation.", "Formally, the multi-head attention can be formulated as: $\mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$ (5) $O' = \mathrm{MultiHead}(Q, K, V) = [\mathrm{head}_1; \mathrm{head}_2; \ldots; \mathrm{head}_j]W^o$ (6) where $W_i^Q \in \mathbb{R}^{2h \times D}$, $W_i^K \in \mathbb{R}^{2h \times D}$, $W_i^V \in \mathbb{R}^{2h \times D}$, and $W^o \in \mathbb{R}^{2h \times 2h}$ are trainable parameters and $D$ is $2h/j$.", "Subsequently, the co-attention networks pass through a feed-forward network (FFN) to add non-linear features while keeping scale-invariant features; the FFN contains a single hidden layer with a ReLU: $\mathrm{FFN}(O') = \max(0, O'W_1 + b_1)W_2 + b_2$ (7)", "where $W_1$, $W_2$, $b_1$, and $b_2$ are the learned parameters.", "$O = C$ and $O = E$ are the output vectors of the two self-attention networks aiming at the claim and the evidence, respectively.", "Finally, to fully integrate the evidence and claim, we adopt the absolute difference and element-wise product to fuse the vectors $E$ and $C$ (Wu et al., 2019).", "As the key contribution of this work is to verify claims accurately and offer evidence as explanations, we design experiments to answer the following questions:", "To evaluate our proposed model, we use two widely used datasets, i.e., RumourEval (Derczynski et al., 2017) and PHEME (Zubiaga et al., 2016).", "Structure.", "The two datasets contain 325 and 6,425 Twitter conversation threads, respectively, associated with different newsworthy events like Charlie Hebdo, the shooting in Ottawa, etc.", "A thread consists of a claim and a tree of comments (a.k.a. 
responses) expressing their opinion towards the claim.", "Labels.", "Both datasets have the same labels, i.e., true, false, and unverified.", "Since our goal is to verify whether a claim is true or false, we filter out unverified tweets.", "Table 1 gives statistics of the two datasets.", "In consideration of the imbalance label distributions, besides accuracy (A), we add precision (P), recall (R) and F1-score (F1) as evaluation metrics for DTCA and baselines.", "We divide the two datasets into training, validation, and testing subsets with proportion of 70%, 10%, and 20% respectively.", "We turn all hyper-parameters on the validation set and achieve the best performance via a small grid search.", "For hyper-parameter configurations, (1) in DTE, the change range of semantic similarity, the credibility of reviewers, and the credibility of comments respectively belong to [0, 0.8], [0, 0.8], and [0, 0.7]; (2) in CaSa, word embedding size d is set to 768; the size of LSTM hidden states h is 120; attention heads and blocks are 6 and 4 respectively; the dropout of multi-head attention is set to 0.8; the initial learning rate is set to 0.001; the dropout rate is 0.5; and the mini-batch size is 64.", "SVM (Derczynski et al., 2017) is used to detect fake news based on manually extracted features.", "CNN (Chen et al., 2017) adopts different window sizes to obtain semantic features similar to n-grams for rumor classification.", "TE (Guacho et al., 2018) creates article-by-article graphs relying on tensor decomposition with deriving article embeddings for rumor detection.", "DeClarE (Popat et al., 2018) presents attention networks to aggregate signals from external evidence articles for claim verification.", "TRNN (Ma et al., 2018b) proposes two tree-structured RNN models based on top-down and down-top integrating semantics of structure and content to detect rumors.", "In this work, we adopt the top-down model with better results as the baseline.", "MTL-LSTM (Kochkina et al., 2018) jointly trains rumor detection, claim verification, and stance detection tasks, and learns correlations among these tasks for task learning.", "Bayesian-DL (Zhang et al., 2019) uses Bayesian to represent the uncertainty of prediction of the veracity of claims and then encodes responses to update the posterior representations.", "Sifted-MTL (Wu et al., 2019) is a sifted multitask learning model that trains jointly fake news detection and stance detection tasks and adopts gate and attention mechanism to screen shared features.", "compared models on the two datasets.", "We observe that: SVM integrating semantics from claim content and comments outperforms traditional neural networks only capturing semantics from claim content, like CNN and TE, with at least 4.75% and 6.96% boost in accuracy on the two datasets respectively, which indicates that semantics of comments are helpful for claim verification.", "On the whole, most neural network models with semantic interaction between comments and claims, such as TRNN and Bayesian-DL, achieve from 4.77% to 9.53% improvements in accuracy on the two datasets than SVM without any interaction, which reveals the effectiveness of the interaction between comments and claims.", "TRNN, Bayesian-DL, and DTCA enable claims and comments to interact, but the first two models get the worse performance than DTCA (at least 1.06% and 1.19% degradation in accuracy respectively).", "That is because they integrate all comments indiscriminately and might introduce some noise into their models, while DTCA 
picks more valuable comments by DTE.", "at most 3.29% and 1.86% boosts in recall than DTCA on the two datasets respectively, but they also bring out noise, which achieve from 1.06% to 21.11% reduction than DTCA in the other three metrics.", "Besides, DTCA achieves 3.11% and 2.41% boosts than the latest baseline (sifted-MTL) in F1-score on the two datasets respectively.", "These elaborate the effectiveness of DTCA.", "In Section 4.3, we find that the use of comments can improve the performance of models.", "To further investigate the quantitative impact of comments on our model, we evaluate the performance of DTCA and CaSa with 0%, 50%, and 100% comments.", "The experimental results are shown in Table 3.", "We gain the following observations: Models without comment features present the lowest performance, decreasing from 5.08% to 9.76% in accuracy on the two datasets, which implies that there are a large number of veracity-indicative features in comments.", "As the proportion of comments expands, the performance of models is improved continuously.", "However, the rate of comments for CaSa raises from 50% to 100%, the boost is not significant, only achieving 1.44% boosts in accuracy on RumourEval, while DTCA obtains better performance, reflecting 3.90% and 3.28% boosts in accuracy on the two datasets, which fully proves that DTCA can choose valuable comments and ignore unimportant comments with the help of DTE.", "To answer RQ2, we analyze the changes of model performance under different decision conditions.", "Different decision conditions can choose different comments as evidence to participate in the model learning.", "According to the performance change of the model verification, we are capable of well explaining the process of evidence selection through decision conditions.", "Specifically, we measure different values (interval [0, 1]) as thresholds of decision conditions so that DTE could screen different comments.", "Figure", "3(a),", "(b), and", "(c) respectively present the influence of semantic similarity ( simi ), the credibility of reviewers ( r cred ), and the credibility of comments ( c cred ) on the performance of DTCA, where the maximum thresholds are set to 0.7, 0.7, and 0.6 respectively because there are few comments when the decision threshold is greater than these values.", "We observe that: When simi is less than 0.4, the model is continually improved, where the average performance improvement is about 2 % (broken lines) on the two datasets when simi increases by 0.1.", "Especially, DTCA earns the best performance when simi is set to 0.5 ( < 0.5), while it is difficult to improve performance after that.", "These exemplify that DTCA can provide more credibility features under appropriate semantic similarity for verification.", "DTCA continues to improve with the increase of r cred , which is in our commonsense, i.e., the more authoritative people are, the more credible their speech is.", "Analogously, DTCA boosts with the increase of c cred .", "These show the reasonability of the terms of both the credibility of reviewers and comments built by meta data.", "When simi is set to 0.5 ( < 0.5), r cred is 0.7 ( < 0.7), c cred is 0.6 ( < 0.6), DTCA wins the biggest improvements, i.e., at least 3.43%, 2.28%, and 2.41% on the two datasets respectively.", "At this moment, we infer that comments captured by the model contain the most powerful evidence for claim verification.", "This is, the optimal evidence is formed under the conditions of moderate semantic similarity, high 
reviewer credibility, and higher comment credibility, which explains the selection process of evidence.", "To answer RQ3, we visualize comments (evidence) captured by DTE and the key semantics learned by CaSa when the training of DTCA is optimized.", "Figure 4 depicts the results based on a specific sample in PHEME, where at the comment level, red arrows represent the captured evidence and grey arrows denote the unused comments; at the word level, darker shades indicate higher weights given to the corresponding words, representing higher attention.", "We observe that: In line with the optimization of DTCA, the comments finally captured by DTE contain abundant evidence that questions the claim, like presumably Muslim? How do you arrive at that?', but it should be confirmed 1st before speculating on air.', and false flag', to prove the falsity of the claim (the label of the claim is false), which indicates that DTCA can effectively discover evidence to explain the results of claim verification.", "Additionally, there are common characteristics in captured comments, i.e., moderate semantic similarity (interval [0.40, 0.49]), high reviewer credibility (over 0.66), and high comment credibility (over 0.62).", "For instance, the values of the three characteristics of evidence @TimGrif-fiths85 where was that reported? #dickface' are 0.47, 0.66, and 0.71 respectively.", "These phenomena explain that DTCA can give reasonable explanations to the captured evidence by decision conditions of DTE, which visually reflects the interpretability of DTCA method itself.", "At the word level, the evidence-related words presumably Muslim', made fun of', shooter', and isn't Islamist' in comments receive higher weights than the evidence-independent words surprised', confirmed 1st' and speculating', which illustrates that DTCA can earn the key semantics of evidence.", "Moreover, weekly Charlie Hebdo' in the claim and Islamist' and Muham-mad' in comments are closely focused, which is related to the background knowledge, i.e., weekly Charlie Hebdo is a French satirical comic magazine which often publishes bold satire on religion and politics.", "report say' in claim is queried in the comments, like How do you arrive at that?' 
and false flag'.", "These visually demonstrate that DTCA can uncover the questionable and even false parts in claims.", "Table 4 provides the performance of DTCA under claims with different numbers of comments.", "We observe that DTCA achieves satisfactory performance on claims with more than 8 [Table 4: Performance comparison of DTCA under claims with different numbers of comments, reported as A/P/R/F1 (%). Claims with fewer than 3 comments: RumourEval 73.42/68.21/73.50/70.76, PHEME 74.10/72.45/75.32/73.86. Claims with 3 to 8 comments: RumourEval 75.33/69.16/75.07/71.99, PHEME 77.26/74.67/79.03/76.79. Claims with more than 8 comments: RumourEval 80.25/75.61/83.45/79.34, PHEME 80.36/75.52/84.33/79.68.]", "comments, while on claims with fewer than 8 comments, DTCA does not perform well, underperforming its best performance by at least 4.92% and 3.10% in accuracy on the two datasets, respectively.", "Two reasons might explain this issue: 1) a claim with few comments has limited attention, and its false parts are hard for the public to find; 2) DTCA is capable of capturing worthwhile semantics from multiple comments, but it is not suitable for verifying claims with fewer comments.", "We proposed a novel framework combining decision trees and neural attention networks to explore a transparent and interpretable way to discover evidence for explainable claim verification, which constructed a decision tree model to select comments with high credibility as evidence, and then designed co-attention networks to make the evidence and claims interact with each other for unearthing the false parts of claims.", "Results on two public datasets demonstrated the effectiveness and explainability of this framework.", "In the future, we will extend the proposed framework by considering more context (meta data) information, such as time, storylines, and comment sentiment, to further enrich the explainability.", "The research work is supported by the National Key Research and Development Program of China (2019YFB2102300); the World-Class Universities (Disciplines) and the Characteristic Development Guidance Funds for the Central Universities of China (PY3A022); Ministry of Education Fund Projects (18JZD022 and 2017B00030); Shenzhen Science and Technology Project (JCYJ20180306170836595); Basic Scientific Research Operating Expenses of Central Universities (ZDYF2017006); Xi'an Navinfo", "Corp.& Engineering Center of Xi'an Intelligence Spatial-temporal Data Analysis Project (C2020103); Beilin District of Xi'an Science & Technology Project (GX1803).", "We would like to thank the anonymous reviewers for their insightful comments." ]
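Eq. 4 in the DTCA excerpt above is the standard scaled dot-product attention, and the output layer fuses the evidence and claim vectors E and C via an absolute difference and an element-wise product. A minimal sketch of both operations follows; the exact concatenation order in the fusion is an assumption, as the excerpt does not spell it out.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d)) V  (Eq. 4)."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    return torch.softmax(scores, dim=-1) @ v

def fuse(e, c):
    """Fuse evidence E and claim C with |E - C| and E * C before the softmax layer."""
    return torch.cat([e, c, (e - c).abs(), e * c], dim=-1)
```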
[ "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "method", "method", "other", "other", "other", "other", "other", "method", "other", "abstain", "other", "objective", "other", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "abstain", "objective", "other", "other", "other" ]
[ "Most of the existing models for document-level machine translation adopt dual-encoder structures.", "The representation of the source sentences and the document-level contexts 1 are modeled with two separate encoders.", "Although these models can make use of the document-level contexts, they do not fully model the interaction between the contexts and the source sentences, and can not directly adapt to the recent pre-training models (e.g., BERT) which encodes multiple sentences with a single encoder.", "In this work, we propose a simple and effective unified encoder that can outperform the baseline models of dual-encoder models in terms of BLEU and METEOR scores.", "Moreover, the pre-training models can further boost the performance of our proposed model.", "Thanks to the development of the deep learning methods, the machine translation systems have achieved good performance that is even comparable with human translation in the news domain (Hassan et al., 2018).", "However, there are still some problems with machine translation in the document-level context (Laubli et al., 2018).", "Therefore, more recent work (Jean et al., 2017; Wang et al., 2017; Tiedemann and Scherrer, 2017; Maruf and Haffari, 2018; Bawden et al., 2018; Voita et al., 2019a; Junczys-Dowmunt, 2019) is focusing on the document-level machine translation.", "Most of the existing models (Zhang et al., 2018; Maruf et al., 2019; Werlen et al., 2018) for document-level machine translation use two encoders to model the source sentences and the document-level contexts.", "Figure 1a illustrates the structure of these models.", "They extend the standard 1 In this work, document-level contexts denote the surrounding sentences of the current source sentence.", "Transformer model with a new context encoder, and the encoder for source sentences is conditioned on this context encoder.", "However, they do not fully model the interaction between the contexts and the source sentences because the self-attention layers are performed inside each encoder separately.", "Moreover, it cannot be directly adapted to the recent pre-training models (Devlin et al., 2019; Peters et al., 2018; Radford et al., 2019; Dong et al., 2019; Song et al., 2019; Lample and Conneau, 2019), which encodes multiple sentences with a single encoder.", "Different from the dual-encoder structure, the uni-encoder structure takes the concatenation of contexts and source sentences as the input (as shown in Figure 1b).", "Therefore, when modeling the contexts, it can make full use of the interaction between the source sentences and the contexts, while the dual-encoder model fails to exploit this information.", "Moreover, the uni-encoder structure is identical to the recent pre-training models (e.g., BERT).", "However, the previous uni structure suffers from two problems for document-level machine translation.", "First, the attention is distracted due to longer sequences.", "Second, the source sentences and the contexts are modeled equally, which is contrary to the fact that the translation is more related to the current source sentences.", "To address these problems, we propose a novel flat structure with a unified encoder called Flat-Transformer.", "It separates the encoder of standard Transformers into two parts so that the attention can concentrate at both the global level and the local level.", "At the bottom of the encoder blocks, the self-attention is applied to the whole sequence.", "At the top of the blocks, it is only implemented at the position of the source 
sentences.", "We evaluate this model on three document-level machine translation datasets.", "Experiments show that it can achieve better performance than the baseline models of dual-encoder structures in terms of BLEU and METEOR scores.", "Moreover, the pre-training models can further boost the performance of the proposed structure.", "Formally, we denote X = { x 1 , x 2 , , x N } as the source document with N sentences, and Y = { y 1 , y 2 , , y M } as the target document with M sentences.", "We assume that N = M because the sentence mismatches can be fixed by merging sentences with sentence alignment algorithms (Sen-nrich and Volk, 2011).", "Therefore, we can assume that ( x i , y i ) is a parallel sentence pair.", "Following Zhang et al. (2018), y <i can be omitted because x <i and y <i conveys the same information.", "As a result, the probability can be approximated as: P ( Y | X ) N (cid:89) i =1 P ( y i | x i ; x <i ; x >i ) (1) where x i is the source sentence aligned to y i , and ( x <i , x >i ) is the document-level context used to translate y i .", "The flat structure adopts a unified encoder that does not distinguish the context sentences and the source sentences.", "Therefore, we introduce the segment embedding to identify these two types of inputs.", "Formally, given the source input of the surrounding context c and the current sentence x , we project them into word embedding and segment embedding.", "Then, we perform a concatenation operation to unify them into a single input: e = [ E ( c ); E ( x )] (2) s = [ S ( c ); S ( x )] (3) where [; ] denotes the concatenation operation, E is the word embedding matrix, and S is the segment embedding matrix.", "Finally, we add e and s as the input of the encoder.", "Given the document context, the input sequences of Flat-Transformer are much longer than the standard Transformer, which brings additional challenges.", "First, the attention is distracted, and its weights become much smaller after the normalization operation.", "Second, the memory consumption and the computation cost increase, so it is difficult to enlarge the model size, which hinders the adaptation to the pre-training model.", "To address this problem, we introduce a unified flat encoder.", "As shown in Figure 2, at the bottom of the encoder blocks, we apply self-attention and the feed-forward layer to the concatenated sequence of the contexts and the current sentence: h 1 = Transformer( e + s ; ) (4) where is the parameter of the Transformer blocks.", "At the top of encoder blocks, each self-attention and feed-forward layer is only implemented on the position of the current sentences: h 2 = Transformer( h 1 [ s : t ]; ) (5) where s and t are the starting and ending positions of the source sentences in the concatenation sequence.", "In this way, the attention can focus more on the current sentences, while the contexts are served as the supplemental semantics for the current sentences.", "It is noted that the total number of the bottom blocks and the top blocks is equal to the number of standard Transformer's blocks, so there is no more parameter than that of the standard Transformer.", "The training of Flat-Transformer is consistent with that of standard Transformer, using the cross entropy loss:", "At the decoding step, it translates the document sentence-by-sentence.", "When translating each sentences, it predicts the target sequence with the highest probability given the current sentence x i and the surrounding contexts x <i , x >i : y i = arg max y i V P ( y i | x i 
; x <i ; x >i ) (7) 2.5 Comparison with Existing Models Here, we summarize some significant differences compared with the existing models for document-level machine translation:", "1. Compared with the dual-encoder models, our model uses a unified encoder.", "To combine the representation of two encoders for the decoder, these dual-encoder models should add a layer inside the encoders.", "Flat-Transformer does not put any layer on top of the standard Transformer, so it is consistent with the recent pre-training models.", "2. Compared with the previous uni-encoder models, our model limits the top transformer layers to only model the source sentences.", "In this way, our model has an inductive bias of modeling on more current sentences than the contexts, because the translation is more related to the current sentences.", "3. There are also some alternative approaches to limit the use of context vectors.", "For example, we can limit only the top attention layers to attend to the source sentence while keeping the feed-forward layers the same.", "Compared with this approach, our model does not feed the output vectors of the context encoder to the decoder, so that the decoder attention is not distracted by the contexts.", "The context vectors in our model is only to help encode a better representation for current source sentences.", "We evaluate the proposed model and several state-of-the-art models on three document-level machine translation benchmarks.", "We denote the proposed model as Flat-Transformer .", "Following the previous work (Maruf et al., 2019), we use three English-German datasets as the benchmark datasets, which are TED, News, and Europarl.", "The statistic of these datasets can be found in Table", "1. We obtain the processed datasets from Maruf et al. (2019) 2 , so that our results can be compared with theirs reported in Maruf et al. 
(2019).", "We use the scripts of Moses toolkit 3 to tokenize the sentences.", "We also split the words into subword units (Sennrich et al., 2016) with 30K merge-operations.", "The evaluation metrics are BLEU (Pap-ineni et al., 2002) and Meteor (Banerjee and Lavie, 2005).", "The batch size is limited to 4 , 000 tokens for all models.", "We set the hidden units of the multi-head component and the feed-forward layer as 512 and 1024 .", "The embedding size is 512 , the number of heads is 4 , and the dropout rate (Srivastava et al., 2014) is 0 .", "3 .", "The number of Transformer blocks 2 https://github.com/sameenmaruf/selective-attn 3 https://github.com/moses-smt/mosesdecoder Model TED News Europarl BLEU METR BLEU METR BLEU METR Dual HAN (Werlen et al., 2018) 24.58 45.48 25.03 44.02 29.58 46.91 SAN (Maruf et al., 2019) 24.62 45.32 24.84 44.27 29.90 47.11 QCN (Yang et al., 2019) 25.19 45.91 22.37 41.88 29.82 47.86 Transformer (Zhang et al., 2018) 24.01 45.30 22.42 42.30 29.93 48.16 +BERT 23.19 45.25 22.06 42.25 30.72 48.62 Uni RNN (Bahdanau et al., 2015) 19.24 40.81 16.51 36.79 26.26 44.14 Transformer (Vaswani et al., 2017) 23.28 44.17 22.78 42.19 28.72 46.22 Our Flat-Transformer 24.87 47.05 23.55 43.97 30.09 48.56 +BERT 26.61 48.53 24.52 45.40 31.99 49.76 Table 2: Results on three document-level machine translation benchmarks (Dual denotes dual-encoder, while Uni means uni-encoder).", "for the top encoder is 5 , while that for the bottom encoder is 1 .", "When fine-tuning on the pre-training BERT, we adopt the base setting, and the hidden size, the feed-forward dimension, and the number of heads are 768, 3072, 12.", "To balance the accuracy and the computation cost, we use one previous sentence and one next sentence as the surrounding contexts.", "We use the Adam (Kingma and Ba, 2014) optimizer to train the models.", "For the hyper-parameters of Adam optimizer, we set two momentum parameters 1 = 0 .", "9 and 2 = 0 .", "98 , and (cid:15) = 1 10 8 .", "The learning rate linearly increases from 0 to 5 10 4 for the first 4 , 000 warming-up steps and then decreases proportional to the inverse square root of the update numbers.", "We also apply label smoothing to the cross-entropy loss, and the smoothing rate is 0 .", "1 .", "We implement the early stopping mechanism with patience that the loss on the validation set does not fall in 10 epochs.", "We compare our models with two categories of baseline models: the dual-encoder models and the uni-encoder models.", "Uni-encoder: RNNSearch (Bahdanau et al., 2015) is an RNN-based sequence-to-sequence model with the attention mechanism.", "Transformer (Vaswani et al., 2017) is a popular model for machine translation, based solely on attention mechanisms.", "For a fair comparison, we use the same hyper-parameters as our model's, which is described in Section 3.2.", "Dual-encoder: Zhang et al. 
(2018) extends the Transformer model with a new context encoder to represent the contexts.", "HAN (Werlen et al., 2018) is the first to use a hierarchical attention model to capture the context in a structured and dynamic manner.", "SAN (Maruf et al., 2019) proposes a new selective attention model that uses sparse attention to focus on relevant sentences in the document context.", "QCN (Yang et al., 2019) proposes a query-guided capsule networks to cluster context information into different perspectives.", "We compare our Flat-Transformer model with the above baselines.", "Table 2 summarizes the results of these models.", "It shows that our Flat-Transformer can obtain scores of 24.87/23.55/30.09 on three datasets in terms of BLEU, and 47.05/43.97/48.56 in terms of METEOR, which significantly outperforms the previous flat models (RNNSearch and Transformer).", "By fine-tuning on BERT, Flat-Transformer can achieve improvements of +1.74/+0.97/+1.90 BLEU scores as well as +1.48/+1.43/+1.20 METEOR scores.", "It proves that Flat-Transformer can be compatible with the pre-training BERT model.", "Except for the BLEU score on the News dataset, the Flat-Transformer can significantly outperform the dual-encoder models, achieving state-of-the-art performance in terms of both BLEU and METEOR scores.", "On the contrary, the dual-encoder Transformer is not compatible with BERT.", "It gets slightly worse performance on two datasets, mainly because the model size becomes larger to adapt the setting of BERT.", "Still, BERT does not provide a good prior initialization for modeling the uni-directional relationship from contexts to source sentences.", "To analyze the effect of each component of Flat-Transformer, we conduct an ablation study by removing them from our models on the TED dataset.", "Table 3 summarizes the results of the ablation study.", "We remove the segment embedding but reserve the unified structure.", "It concludes that the segment embedding contributes to an improvement of 0.51 BLEU score and 0.85 METEOR score, showing the importance of explicitly identifying the contexts and the source sentences.", "After further removing the unified structure of Flat-Transformer, the model becomes a standard Transformer.", "It shows that the unified structures contribute a gain of 1.08 in terms of BLEU and 2.03 in terms of METEOR.", "The reason is that the unified structures encourage the model to focus more on the source sentences, while the contexts can be regarded as the semantic supplements.", "Here we summarize the recent advances in document-level neural machine translation.", "Some work focuses on improving the architectures of the document machine translation models.", "Tiedemann and Scherrer (2017) and Wang et al. (2017) explore possible solutions to exploit the cross-sentence contexts for neural machine translation.", "Zhang et al. (2018) extends the Transformer model with a new context encoder to represent document-level context.", "Werlen et al. (2018) and (Maruf et al., 2019) propose two different hierarchical attention models to model the contexts.", "Yang et al. (2019) introduces a capsule network to improve these hierarchical structures.", "There are also some works analyzing the contextual errors (Voita et al., 2018, 2019b; Bawden et al., 2018) and providing the test suites (Muller et al., 2018).", "More recently, Voita et al. 
(2019a) explores the approaches to incorporate the mono-lingual data to augment the document-level bi-lingual dataset.", "Different from these works, this paper mainly discusses the comparison between dual-encoder models and uni-encoder models and proposes a novel method to improve the uni-encoder structure.", "In this work, we explore the solutions to improve the uni-encoder structures for document-level machine translation.", "We propose a Flat-Transformer model with a unified encoder, which is simple and can model the bi-directional relationship between the contexts and the source sentences.", "Besides, our Flat-Transformer is compatible with the pretraining model, yielding a better performance than both the existing uni-encoder models and the dual-encoder models on two datasets.", "The authors would like to thank the anonymous reviewers for their valuable suggestions and comments.", "We appreciate Sameen Maruf providing the same processed document data as in their work." ]
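The unified flat encoder described above applies its bottom blocks to the whole [context; source] sequence (Eq. 4) and its top blocks only to the source span h1[s:t] (Eq. 5). Below is a minimal PyTorch sketch under the sizes the paper reports (512 hidden units, 1024 feed-forward units, 4 heads, 1 bottom and 5 top blocks); the class and argument names are hypothetical, not the authors' code.

```python
import torch.nn as nn

class FlatEncoder(nn.Module):
    """Bottom blocks attend over [context; source]; top blocks over the source only."""

    def __init__(self, d_model=512, nhead=4, dim_ff=1024, n_bottom=1, n_top=5):
        super().__init__()
        self.bottom = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead, dim_ff, dropout=0.3)
            for _ in range(n_bottom)])
        self.top = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model, nhead, dim_ff, dropout=0.3)
            for _ in range(n_top)])

    def forward(self, h, s, t):
        # h: (seq_len, batch, d_model) embeddings of the concatenated input
        for block in self.bottom:
            h = block(h)   # global self-attention over contexts + source (Eq. 4)
        h = h[s:t]         # keep only the source-sentence positions
        for block in self.top:
            h = block(h)   # local self-attention over the source span (Eq. 5)
        return h
```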
[ "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "other", "other" ]
[ "Spoken language understanding tasks usually rely on pipelines involving complex processing blocks such as voice activity detection, speaker diarization and Automatic speech recognition (ASR).", "We propose a novel framework for predicting utterance level labels directly from speech features, thus removing the dependency on first generating transcripts, and transcription free behavioral coding.", "Our classifier uses a pretrained Speech-2-Vector encoder as bottleneck to generate word-level representations from speech features.", "This pretrained encoder learns to encode speech features for a word using an objective similar to Word2Vec.", "Our proposed approach just uses speech features and word segmentation information for predicting spoken utterance-level target labels.", "We show that our model achieves competitive results to other state-of-the-art approaches which use transcribed text for the task of predicting psychotherapy-relevant behavior codes.", "Speech interfaces have seen a widely growing trend and this has brought about increasing interest in advancing computational approaches to spoken language understanding (SLU).", "(Tur and De Mori, 2011; Xu and Sarikaya, 2014; Yao et al., 2013; Ravuri and Stolcke, 2015).", "SLU systems often rely on Automatic speech recognition (ASR) for generating lexical features.", "The ASR output is then used for the target natural language understanding task.", "Furthermore, end-2-end SLU systems for various applications, including speech synthesis (Oord et al., 2016), ASR tasks (Amodei et al., 2016; Chan et al., 2016; Soltau et al., 2016) and speech-2-text translation (Chung et al., 2019) have shown promising results.", "Recently (Haque et al., 2019) propose a method for learning audio-linguistuc embedding but that too depends on using transcribed text.", "1) error propagation through ASR leading to noisy lexical features", "2) loss of rich information which supplement lexical features, such as prosodic and acoustic expressive speech patterns.", "In this paper, we propose a framework to address the problem of predicting behavior codes directly from speech utterances.", "We focus on data from Motivational Interviewing (MI) sessions, a type of talk-based psychotherapy focused on behavior change.", "In psychology research and clinical practice, behavioral coding is often used to understand process mechanisms and therapy efficacy and outcomes.", "Behavior codes are annotated by an expert at an utterance level (or interaction level) by listening to the session.", "Examples of utterance level behavior codes include if there was a simple of complex reflection by the therapist of their pa-tient's previous utterance(s).", "Several approaches have been proposed for automatic prediction of behavior codes, mainly using lexical features and/or linguistic features such as information from dependency trees (Xiao et al., 2016; Tanana et al., 2016; Perez-Rosas et al., 2017; Cao et al., 2019; Gibson et al., 2019).", "Recent works (Singla et al., 2018; Chen et al., 2019) reveal that using acoustic and X n1 X n2 X nK Bi-directional LSTM layer H n1 H n2 H nK H n1 H n2 H nK LSTM layer X n-10 X n-11 X n-1K C Dense layer Y n-11 Y n-12 Y n-1K X n-11 X n-12 X n-1K Encoder Decoder Figure 2: Speech signal to word encoder (SSWE) which uses sequence-2-sequence framework for generating representations of context words given a word.", "Speech2Vec (Chung and Glass, 2018) has shown that high quality word representations can be learnt by just using speech features.", "It learns word 
"It learns word representations in an unsupervised manner using an objective similar to the Skipgram objective of Word2Vec (Mikolov et al., 2013) (a word representation should be representative of its context words) and a sequence-to-sequence framework.", "However, Speech2Vec only aims to learn word representations, which are averaged spoken-word representations of each word in the corpus.", "Our proposed approach instead exploits a speech-signal-to-word encoder, learnt using an architecture similar to Speech2Vec, to provide lower-level dynamic word representations for the utterance classifier.", "Thus, our system never actually needs to know what the word is, but only word segmentation information.", "We hypothesize that word segmentation information can be obtained with cheaper tools, e.g. a supervised word segmentation system (Tsiartas et al., 2009) or a heuristics-based system relying on acoustic and prosodic cues (Junqua et al., 1994; Iwano and Hirose, 1999).", "We plan to investigate the effect of noise in word boundaries on encoder quality in the future.", "Our end-2-end transcription-free approach is similar to, and perhaps even motivated by, some of the previous works.", "There have been some works (Serdyuk et al., 2018; Lugosch et al., 2019) which perform prediction tasks directly from speech signals but fall short of capturing the underlying linguistic structure of a language (sentences break into words for semantics).", "We believe capturing some of the important linguistic units (e.g. words) is important for spoken language understanding.", "(Qian et al., 2017) is most similar to our work in terms of overall architecture, as they also first get word-level representations and then use the encoder for utterance-level prediction.", "However, (Qian et al., 2017) uses transcribed words, whereas we only use word boundaries for ASR-free end-2-end spoken language understanding.", "As shown in Figure 1, most previous works follow the upper pipeline.", "They start with a transcript (manually generated or through an ASR), which is first segmented into utterances.", "They then use word embeddings for each word in the transcript before feeding it into a classifier to predict target behavior codes.", "Our approach shows competitive results when compared to state-of-the-art models which use transcribed text.", "Our target application domain in this work is psychotherapy.", "While utterance-level behavior coding is a valuable resource for psychotherapy process research, it is also a labor-intensive task for manual annotation.", "Our proposed method, which does not rely on transcripts, should help with cheaper and faster behavioral annotation.", "We believe this framework can be a promising direction for directly performing classification tasks given a spoken utterance.", "We first learn a word-level speech-signal-to-word encoder using a sequence-to-sequence framework.", "Speech-2-Vector follows a learning objective similar to the Skipgram architecture of Word2Vec.", "We then use the pre-trained encoder to predict behavior codes.", "2.1 Speech signal to word encoder Our speech-signal-to-word encoder (SSWE) is an adaptation of Speech2Vec (Chung and Glass, 2018), which in turn is motivated by Word2Vec's skipgram architecture.", "The model learns to predict context words given a word.", "But unlike Word2Vec, in SSWE each word is represented by a sequence of speech frames.", "We adopt the widely known sequence-to-sequence architecture to generate context words given a spoken word.", "Our model generates speech features for context words $(X_{n-4}, X_{n-3}, \ldots, X_{n+4})$ given speech features for a word $X_n$.", "As input for word $X_n$, it takes $K$ 13-dimensional MFCC feature vectors, extracted from every 25 ms window of speech audio using a frame rate of 10 ms.", "$K$ is the maximum number of frames a spoken word can have.", "This input is then processed through a bidirectional LSTM layer (Hochreiter and Schmidhuber, 1997) to generate the context vector $C$.", "$C$ is then used by a unidirectional LSTM decoder to generate the speech features for words in context $(Y_{n-4}, Y_{n-3}, \ldots, Y_{n+4})$.", "We optimize the model by minimizing the mean squared loss between predicted and target outputs: $\sum_{i=1}^{k} \|X_i - Y_i\|^2$.", "Following this approach, our system never uses any form of explicit transcriptions for learning the encoder, only the word boundaries.", "Figure 2 gives a pictorial description of this process.",
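To make the SSWE objective concrete, here is a minimal PyTorch sketch of the encoder-decoder just described. It is an illustration rather than the authors' code: the 13-dimensional MFCC input and the BiLSTM-to-context-vector design follow the text, while the zero-input decoder conditioning, the 50-unit hidden size, and all names are assumptions (the full model also adds an attention mechanism, described later).

```python
import torch
import torch.nn as nn

class SSWE(nn.Module):
    """Sketch of the speech-signal-to-word encoder-decoder (skipgram-style)."""
    def __init__(self, n_mfcc=13, hidden=50):
        super().__init__()
        self.encoder = nn.LSTM(n_mfcc, hidden, bidirectional=True, batch_first=True)
        # Unidirectional decoder whose initial state is the context vector C.
        self.decoder = nn.LSTM(n_mfcc, 2 * hidden, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_mfcc)

    def encode(self, x):
        # x: (batch, K, n_mfcc) MFCC frames of the centre spoken word.
        _, (h, _) = self.encoder(x)              # h: (2, batch, hidden)
        return torch.cat([h[0], h[1]], dim=-1)   # C: (batch, 2 * hidden)

    def forward(self, x, k_out):
        c = self.encode(x)
        state = (c.unsqueeze(0), torch.zeros_like(c).unsqueeze(0))
        # Zero decoder inputs: each step is conditioned on C through the state.
        dec_in = x.new_zeros(x.size(0), k_out, x.size(2))
        out, _ = self.decoder(dec_in, state)
        return self.proj(out)                    # predicted context-word frames

model = SSWE()
word = torch.randn(8, 20, 13)                    # batch of spoken words, K = 20
target = torch.randn(8, 20, 13)                  # frames of one context word
loss = nn.functional.mse_loss(model(word, target.size(1)), target)  # MSE loss
loss.backward()
```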
"Our Speech-2-Vector encoder is trained using a speech corpus and word segmentation information.", "In our setup, we assume we have high quality word segmentation information.", "For the purpose of our experiments, we obtain the word segmentation information using a forced aligner (Ochshorn and Hawkins, 2016) (the aligner uses transcripts, but we only use it for word segmentation and plan to replace it with other tools).", "The forced aligner primarily gives boundaries for the start and end of a word, which are then used to get the speech features for that word.", "We hypothesize that learning word segmentation is a cheaper task than training a full-blown ASR.", "Figure 3 shows a pictorial view of our utterance classifier. [Figure 3: Classifier to predict behavior codes, which takes as input a word-segmented speech signal and uses the pretrained Speech-2-Vector encoder to get word-level representations.]", "Given a word-segmented utterance, we first process the speech features for each word to", "get word-level representations $(W_1, \ldots, W_n)$.", "We then learn a function $c = f(W)$ that maps $W$ to a behavioral code $c \in \{1, 2, \ldots, C\}$, with $C$ being the number of defined target code types.", "We use a parametric composition model to construct utterance-level embeddings from word-level embeddings.", "Word-level representations $(W_1, \ldots, W_n)$ are then fed into a bidirectional LSTM layer to contextualize the word embeddings.", "Contextualized word embeddings are then fed to a self-attention layer to get a sentence representation $S$, which is then used to predict the behavior code for an utterance using a dense layer that projects it to $C$ dimensions with a softmax operation.", "We use a self-attention mechanism similar to the one proposed in (Yang et al., 2016).",
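The classifier just described can be sketched as follows. The 50-unit BiLSTM and 100-unit dense layer match the sizes reported in the experiments section, while the additive form of the self-attention and the module names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UtteranceClassifier(nn.Module):
    """Behavior-code classifier sketch: BiLSTM contextualisation, additive
    self-attention pooling (in the spirit of Yang et al., 2016), and a dense
    softmax projection to C code types."""
    def __init__(self, emb_dim=200, hidden=50, dense=100, n_codes=8):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)        # one attention score per word
        self.dense = nn.Linear(2 * hidden, dense)
        self.out = nn.Linear(dense, n_codes)       # projects to C dimensions

    def forward(self, word_embs):
        # word_embs: (batch, n_words, emb_dim), from the pretrained SSWE encoder.
        h, _ = self.bilstm(word_embs)              # (batch, n_words, 2 * hidden)
        a = torch.softmax(self.att(h), dim=1)      # attention weights over words
        s = (a * h).sum(dim=1)                     # sentence representation S
        return self.out(torch.relu(self.dense(s)))  # behavior-code logits

clf = UtteranceClassifier()
logits = clf(torch.randn(4, 12, 200))              # 4 utterances, 12 words each
```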
"We experiment with two datasets for training the S2V encoder: first on the LibriSpeech corpus (Panayotov et al., 2015) (a 500-hour subset of broadband speech produced by 1,252 speakers) and second, directly on our classifier training data, which we describe below.", "For classification, we use data from Motivational Interviewing sessions (a type of talk-based psychotherapy) for addiction treatment presented in (Tanana et al., 2016; Perez-Rosas et al., 2017).", "There are 337 transcribed sessions (approx. 160 hours of audio) coded by experts at the utterance level with behavioral labels following the Motivational Interviewing Skill Code (MISC) manual (Miller et al., 2003).", "Each human coder segmented talk turns into utterances (i.e., complete thoughts) and assigned one code per utterance for all utterances in a session.", "The majority of sessions were coded once by one of three expert coders.", "In this paper, we use the strategy proposed by (Xiao et al., 2016), grouping all counselor codes into 8 categories (described in Table 1).", "We remove backchannels without timestamps, which cannot be aligned, and split the data into training and testing sets by sessions with a roughly 2:1 ratio.", "This split is consistent with all compared works.", "Speech-2-Vector Encoder: We implemented the model with PyTorch (Paszke et al., 2017).", "Similar to (Chung and Glass, 2018), we also adopted the attention mechanism which enables the decoder to condition every decoding step on the last hidden state of the encoder (Subramanian et al., 2018).", "The window size was set to 4.", "We train the model using stochastic gradient descent (SGD) with a learning rate of 1e-3 and a batch size of 64 (spoken-word, context) pairs.", "We experimented with hyper-parameter combinations for: using bidirectional or unidirectional RNNs, using a GRU vs. LSTM cell, the number of LSTM hidden layers, and learning rates.", "We found there was not a big difference in encoder output quality with higher dimensions.", "Therefore, we use a 50-dimensional LSTM cell; the resulting encoder output thus becomes 100 (bidirectional last hidden states) + 100 (cell states) = 200 dimensions.", "Utterance Classifier: The chosen batch size was 40 utterances.", "The LSTM hidden state dimension is 50.", "We use dropout at the embedding layer with drop probability 0.3.", "The dense layer is of 100 dimensions.", "The model is trained using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 0.001 and an exponential decay. [Table 2: Word embeddings learnt using speech features (Speech2Vec) vs. Word2Vec (F1-scores). Word2Vec (Google-wiki): 0.53; Word2Vec (In-domain): 0.56; Speech2Vec (LibriSpeech): 0.58; Speech2Vec (Libre+In-domain*): 0.60.]", "Similar to prior work, we also weight each sample according to its normalized inverse frequency ratio.", "Speech2Vec vs. Word2Vec: Table 2 shows results comparing the performance of the system when we use lexically-derived word embeddings (Word2Vec) vs. speech-feature-derived word embeddings (Speech2Vec).", "If a word appears in a corpus n times, then Speech2Vec uses a system similar to our Speech-2-Vector encoder and averages the n spoken-word representations to get a word embedding for that dictionary word.", "Results confirm two main observations:", "1) It is better to learn/fine-tune the word embeddings on an in-domain dataset.", "2) Speech2Vec, which learns word embeddings based on different spoken variations of a word, provides better results for behavior code prediction.", "This result is consistent with findings from (Singla et al., 2018; Chen et al., 2019) where it is shown that acoustic-prosodic information can provide complementary information for predicting behavior codes and hence produce better results.", "One challenge is that SSWE and Speech2Vec generally need a large amount of transcribed data to learn high quality word embeddings.", "Therefore, we first train SSWE on a general speech corpus (here, LibriSpeech (Libre)) before fine-tuning it on our classifier training data (results marked with * show this experiment).",
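The 200-dimensional encoder output described above amounts to simple dimension bookkeeping, sketched here with PyTorch:

```python
import torch
import torch.nn as nn

enc = nn.LSTM(13, 50, bidirectional=True, batch_first=True)
frames = torch.randn(1, 20, 13)          # one spoken word, 20 MFCC frames
_, (h, c) = enc(frames)                  # h, c: (2, 1, 50), one per direction
# 100 (bidirectional last hidden states) + 100 (cell states) = 200 dimensions.
embedding = torch.cat([h[0], h[1], c[0], c[1]], dim=-1)
print(embedding.shape)                   # torch.Size([1, 200])
```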
"Transcriptions vs. No Transcriptions: Methods discussed above still rely on transcriptions to know what the word is.", "However, our proposed method does not use any explicit transcription but only the word segmentation information.", "Results in Table 3 show that using a pre-trained Speech-2-Vector encoder as a building block to get word representations can lead to competitive results to other methods which rely heavily on first generating transcripts of the spoken utterance.", "Here Model Pretrain data F1-score Majority class -0.33 Single-modality Word2Vec Indomain 0.56 Prosodic Indomain 0.42 Multimodal Word2Vec+Prosodic Indomain 0.58 Speech2Vec Libre+Indomain* 0.60 Speech-only (Our approach) SSWE Indomain 0.49 SSWE Indomain 0.44 SSWE Libre+Indomain* 0.56 SSWE Libre+Indomain* 0.50 Table 3: We compare our proposed approach to previous approaches.", "we also compare our model to the multimodal approach proposed by (Singla et al., 2018; Chen et al., 2019) where they use word-level prosodic features along with lexical word embeddings.", "Prosodic and Word2Vec+Prosodic show results for this system.", "Table 3 also shows that doing end-2-end training (results with *) where our Speech-2-Vector encoder is also updated by the classifier loss generates poor results.", "We hypothesize that it can be due to the fact that our behavior code prediction data was split to minimize the speaker overlap.", "Thus it becomes easier to overfit when we fine-tune it on some speaker-related properties instead of generalizing for behaviour code prediction task.", "We show that comparable results can be achieved for behavior code prediction by just using speech features and without any ASR or human transcriptions.", "Our approach still depends on word segmentation information, however, we believe obtaining word segmentation from speech is comparatively easier than building a high quality ASR.", "The evaluation results show the application significance of an end-2-end speech to behavioral coding for psychotherapy conversations.", "This allows for building systems that do not include explicit transcriptions, an attractive option for privacy reasons, when the end goal (as determined by the behavioral codes) is to characterize the overall quality of the clinical encounter for training or quality assurance.", "The results still vary and are worse compared to using human annotations.", "We plan to do a detailed analysis along two lines:", "1) Comparing if the proposed modeling technique can help bridge gap between predicted and human annotations, and", "2) Effect of environment variables e.g., background noise, speaker features, different languages etc.", "We believe our approach can benefit from some straightforward modifications to the architecture, such as using convolutional neural networks which have shown to perform better at handling time-continuous data like speech.", "This work was supported by the NIH." ]
[ "abstain", "objective", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "Paralinguistics, the non-lexical components of speech, play a crucial role in human-human interaction.", "Models designed to recognize paralinguistic information, particularly speech emotion and style, are difficult to train because of the limited labeled datasets available.", "In this work, we present a new framework that enables a neural network to learn to extract paralinguistic attributes from speech using data that are not annotated for emotion.", "We assess the utility of the learned embeddings on the downstream tasks of emotion recognition and speaking style detection, demonstrating significant improvements over surface acoustic features as well as over embeddings extracted from other unsupervised approaches.", "Our work enables future systems to leverage the learned embedding extractor as a separate component capable of highlighting the paralinguistic components of speech.", "An effective speech-based AI system is capable of not only recognizing and interpreting the linguistic content of speech but also recognizing and interpreting its paralinguistic attributes.", "While the linguistic elements of speech encode what was said (i.e., the content), the paralinguistic elements encode how it was said (i.e., emotion, style, etc.) (Schuller and Batliner, 2013).", "The detection and modeling of paralinguistic attributes have many potential applications; ranging from affect-aware Human-Computer Interaction (HCI) systems (Vin-ciarelli et al., 2015) to the management of mental health (Karam et al., 2014; Cummins et al., 2015).", "One major challenge with building robust paralinguistic models is the limited access to large-scale, labeled datasets that are needed for training the machine learning models.", "For instance, a typical emotion dataset (e.g., IEMOCAP) that is used for building paralinguistic models contains around 12 hours of speech while a modern dataset used for building speaker recognition models contains around 2000 hours of speech (Nagrani et al., 2017).", "It is therefore critical that features used in paralinguistic tasks distill relevant information from the original signal to allow the recognizers to effectively detect the target attributes.", "With this in mind, new methods that can leverage unlabeled data for distilling paralinguistic information from speech should be explored.", "In this work, we introduce the Expressive Voice Conversion Autoencoder (EVoCA), an unsupervised framework that distills paralinguistic attributes from speech without relying on explicit emotion or style labels.", "EVoCA is designed to enable a neural network to learn what it means for speech to be expressive by treating expressive speech as a modulation of neutral speech.", "EVoCA is trained using parallel speech inputs: one expressive and one neutral.", "However, these types of parallel paralinguistic corpora are not available at scale.", "To address this, we use a large audiobook corpus (i.e., 200 hours) composed of expressive speech and artificially generate the parallel neutral, nonexpressive speech using the available transcriptions (see Figure 1).", "We train the EVoCA model to convert between non-expressive synthetic speech and expressive real speech, demonstrating how this conversion yields an embedding that captures paralinguistic attributes (see Figure 2).", "The benefit of the EVoCA framework is that once trained, the component responsible for producing the paralinguistic embeddings can be used as a front-end speech transformer for a variety of downstream applications.", "We show 
that these learned paralinguistic embeddings can be used in downstream emotion recognition and speaking style classification tasks.", "We show that the EVoCA framework learns embeddings that outperform those obtained using other unsupervised and self-supervised speech feature learning methods from the literature.", "To the best of our knowledge, ours is the first work to demonstrate how one can learn paralinguistic features by training a neural model to convert between non-expressive synthetic speech and expressive real speech.", "Speech emotion recognition applications rely on an extensive set of acoustic features that have evolved over the years (Schuller et al., 2009, 2010, 2011, 2013; Eyben et al., 2015).", "Spectral features are a crucial component of any emotion feature set and are included in the widely used ComParE and eGeMAPs feature sets (Schuller et al., 2013; Eyben et al., 2015).", "Common surface features that are derived from the speech spectrum include Mel-frequency cepstral coefficients (MFCCs) and Mel-filterbanks (MFBs).", "In this work, we propose a framework for learning an MFB transformation that highlights the paralinguistic content of an utterance; we demonstrate the effectiveness of the learned transformation over surface MFB features on emotion and speaking style classification tasks.", "Our work also explores the utility of using both synthetic and real speech to learn paralinguistic information.", "Lotfian and Busso have previously demonstrated how speech synthesizers can be used to remove emotion from a speech utterance to provide trained emotion recognizers with a neutral reference to aid in the recognition of expressive speech (Lotfian and Busso, 2015).", "One limitation with their approach, however, is that it relied on having access to a real-time speech synthesizer to generate a neutral version of the input utterance for use by the emotion recognizer.", "In contrast, we use the speech synthesizer only during the data preparation process (Figure 1) and not during test time.", "Our approach is related to works that focused on unsupervised and self-supervised speech representation learning.", "Chung et al. introduced two auto-regressive methods for learning MFB transformations for speech applications without relying on explicit labels (Chung et al., 2019).", "Both of the proposed models were trained to predict future frames of the input speech sequence in order to learn global structures represented in the speech signal.", "They showed that the resulting transformation improved performance over surface features on speaker verification and phone recognition tasks.", "Hsu et al. 
devised a variational autoencoder that is capable of learning hierarchical information present in speech data (Hsu et al., 2017).", "Their approach disentangled frame-level features from utterance-level features in order to provide robust embeddings for both speaker recognition and automatic speech recognition tasks.", "Although several unsupervised learning strategies exist for learning speech transformations, ours is the only approach that is targeted at learning transformations that highlight expressive characteristics in speech.", "Recent works in voice conversion have also inspired our proposed approach.", "The goal of voice conversion is to convert an utterance from one speaker so that it sounds as if it was spoken by another speaker (Mohammadi and Kain, 2017).", "In other words, a voice converter retains all linguistic content and only modulates the paralinguistics of speech.", "Previous works demonstrated that voice conversion techniques can be used to convert between emotional states (Gao et al., 2019; Shankar et al., 2019a,b).", "In this work we primarily focus on the use of parallel voice conversion methods, and future work will explore the trade-offs between parallel and non-parallel approaches.", "However, to the best of our knowledge, our work is the first to show that the voice conversion task can be adapted [Figure 2: An overview of the proposed Expressive Voice Conversion Autoencoder (EVoCA): a paralinguistic encoder embeds the real-expressive input utterance, and a voice converter transforms the synthetic-neutral input utterance into a converted-expressive output utterance.]", "and incorporated into a framework that enables a neural network to learn compact embeddings that capture speech expressiveness.", "A sketch of our data generation setup is shown in Figure 1.", "Given an audiobook corpus, where both speech and text modalities are available, we use the text to create synthetic speech samples using a speech synthesizer.", "The created synthetic speech should lack expressiveness.", "This provides our system with the opportunity to learn how to characterize expressiveness and imbue the non-expressive speech with expressive characteristics.", "We use the open-source Festival toolkit, as previous research has demonstrated its utility for generating neutral, non-expressive speech (Lotfian and Busso, 2017).", "Once the speech synthesis process finishes, our data contain pairs of real (expressive) speech and synthetic (neutral, non-expressive) speech.", "Our EVoCA model then leverages the resulting parallel data to learn an embedding transformation that facilitates the conversion from synthetic to real speech without relying on any manual emotion or style labels.", "A sketch of EVoCA is shown in Figure 2.",
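The data preparation in Figure 1 can be sketched as below. This is illustrative rather than the authors' pipeline: it assumes Festival's standard text2wave utility is installed, and the function and file names are hypothetical.

```python
import subprocess
from pathlib import Path

def make_parallel_pairs(utterances, out_dir="synthetic"):
    """utterances: iterable of (utt_id, transcript, path_to_real_wav).
    Returns (synthetic-neutral wav, real-expressive wav) path pairs."""
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    pairs = []
    for utt_id, text, real_wav in utterances:
        txt_path = out / f"{utt_id}.txt"
        wav_path = out / f"{utt_id}.wav"
        txt_path.write_text(text)
        # Festival renders the transcript as flat, non-expressive speech.
        subprocess.run(["text2wave", str(txt_path), "-o", str(wav_path)],
                       check=True)
        pairs.append((str(wav_path), str(real_wav)))
    return pairs
```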
"The EVoCA model converts neutral speech to expressive speech.", "In the process, the paralinguistic encoder learns a compact embedding that encodes paralinguistic elements, including expressiveness.", "The paralinguistic embedding and the paired synthetic speech sample are fed into the voice converter, which produces expressive speech.", "A reconstruction loss (L2) between the generated expressive speech and the original expressive speech is computed and used to train the style autoencoder in an end-to-end fashion.", "Once trained, the paralinguistic encoder can be used as a speech transformer to create features that highlight the expressive components of input speech.", "We use four datasets in this work: Blizzard2013, IEMOCAP, MSP-IMPROV, and VESUS.", "Blizzard2013 is used to train the EVoCA model while the other three datasets are used to test the effectiveness of the learned embeddings on speech emotion recognition and speaking style detection.", "Blizzard2013.", "The Blizzard2013 dataset contains around 200 hours from 55 American English audiobooks read by Catherine Byers.", "Although other audiobook-based datasets are publicly available, we choose the Blizzard2013 corpus due to its highly expressive and animated nature.", "This corpus was used in previous research to model style and prosody in speech synthesis applications (Wang et al., 2018; Zhang et al., 2019c).", "We use a segmented version of the corpus, which we obtained from the 2013 Blizzard Challenge website (http://www.cstr.ed.ac.uk/projects/blizzard/).", "IEMOCAP.", "The IEMOCAP dataset was created to explore emotion expression in dyadic interactions (Busso et al., 2008).", "Pairs of actors, one male and one female, were recorded while interacting in scripted and improvised roles that were designed to elicit emotional expressions.", "The dataset contains 12 hours of speech from 10 individuals.", "The recordings from each interaction were manually segmented into utterances based on speaker turns in the dialogue.", "The resulting utterances were manually labeled by five annotators for both categorical and continuous emotion labels.", "We only consider utterances that had majority agreement among the annotators and focus on four basic categorical emotions: happy (merged with excited), angry, neutral, and sad.", "In addition to emotion labels, the IEMOCAP dataset provides spontaneity labels (acted vs. spontaneous), which we use in our speaking style detection experiments.",
"MSP-IMPROV.", "The MSP-IMPROV dataset was created to capture naturalistic expressions from improvised scenarios while partially controlling for variations in the lexical modality (Busso et al., 2016).", "Similar to IEMOCAP, pairs of actors, one male and one female, were recorded while interacting in improvised scenarios, which included pre-specified target sentences that actors were asked to incorporate into their dialogue.", "The dataset is nine hours in duration from 12 speakers.", "The resulting utterances were manually labeled for emotion using crowd-sourced annotators.", "We only consider utterances whose labels had a majority agreement among the annotators and focus on four basic emotion labels: happy, angry, neutral, and sad.", "VESUS.", "The VESUS dataset provides around seven hours of lexically-controlled emotional data (Sager et al., 2019).", "In contrast to IEMOCAP and MSP-IMPROV, where emotion elicitation and expression happen in improvised scenarios, actors in the VESUS dataset were asked to read the same set of 250 semantically neutral phrases in five different emotions: happy, angry, neutral, sad, and fearful.", "The dataset contains around seven hours of speech from 10 speakers, five males and five females.", "The resulting utterances were labeled for emotional content by 10 crowd-sourced annotators.", "In our experiments, we focus on utterances that achieved at least 50% agreement among the crowd-sourced annotators with respect to the actor's intended emotion.", "We first pre-process speech samples from all datasets such that they have a sampling rate of 16 kHz and then extract 80-dimensional MFB features using the Librosa toolkit (McFee et al., 2015) with a 50 ms Hanning window and a step size of 12.5 ms, consistent with previous research in voice conversion (Zhang et al., 2019a).", "We z-normalize the frequency bins per utterance for the voice converter and mean-normalize the bins per utterance for the paralinguistic encoder, consistent with normalization methods used in previous works (Snyder et al., 2018; Zhang et al., 2019c).", "Normalization ensures that the features are robust to variations that could arise from having different recording conditions (Benesty et al., 2007).", "Voice conversion is a regression task where the goal is to output the MFB features of an expressive speech utterance given the MFB features of the synthesized speech utterance.", "Emotion recognition is posed as a multi-class classification task where the goal is to recognize the target emotion.", "Lastly, speaking style detection is posed as a binary classification task where the goal is to recognize whether the target data are acted or spontaneous.", "We use Mel-cepstral distortion (MCD) and root mean square error (RMSE) of F0 for evaluating the quality of the converted speech (Zhang et al., 2019a) when training the end-to-end model.", "MCD and F0 RMSE cannot be directly extracted from the MFB acoustic features used by our conversion model.", "Thus, we use Librosa to invert the MFB features to audio by first approximating the Short-time Fourier transform (STFT) magnitude and then using the Griffin-Lim algorithm to reconstruct the phase.", "We extract the F0 and 24-dimensional mel cepstral coefficients from the waveform using the WORLD vocoder (Morise et al., 2016), following (Zhang et al., 2019a,c).", "We report unweighted average recall (UAR) and accuracy for the emotion recognition and speaking style detection tasks, respectively.",
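A sketch of this front-end with Librosa follows; the 80 Mel bins, 50 ms Hann window, and 12.5 ms step follow the text, while the FFT size and the small stabilising constants are assumptions.

```python
import librosa
import numpy as np

def extract_mfb(wav_path):
    # 16 kHz audio; 50 ms window = 800 samples, 12.5 ms hop = 200 samples.
    y, sr = librosa.load(wav_path, sr=16000)
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, win_length=800,
        hop_length=200, window="hann", n_mels=80)
    logmel = np.log(mel + 1e-6).T            # (frames, 80)
    # z-normalise each frequency bin per utterance (voice-converter input);
    # the paralinguistic encoder uses mean normalisation instead.
    return (logmel - logmel.mean(0)) / (logmel.std(0) + 1e-6)
```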
"The UAR metric is used to account for the class imbalance that is inherent in the emotion data (Rosenberg, 2012).", "We design our experiments to address the following four questions regarding the proposed framework shown in Figure 2:", "1. Is the proposed framework capable of inserting expressiveness into synthetic speech?", "2. Can the learned paralinguistic embeddings be used for emotion and style classification?", "3. How do changes to the structure of the proposed framework affect both the quality of the converted speech and the effectiveness of the extracted embeddings for emotion and speaking style detection tasks?", "4. How does the performance of paralinguistic embeddings in emotion and speaking style detection tasks compare to those of feature transformations learned using other unsupervised methods?", "The proposed EVoCA consists of two components: the voice converter and the paralinguistic encoder.", "The voice converter consists of a stack of four Bidirectional Long Short-Term Memory (BLSTM) layers, each with a hidden size of 256, followed by a 1D convolution layer with 80 channels and a kernel size of one.", "The paralinguistic encoder we use consists of a stack of two BLSTM layers, each with a hidden size of 256.", "The fixed-size embeddings from the paralinguistic encoder are induced by taking the mean of the hidden representations from the last BLSTM layer and then passing the outputs through a linear layer, which reduces the size by half.", "The reasoning for this linear layer is to counteract the bidirectional property of the BLSTM, which outputs hidden representations that are twice the size of the hidden layer.", "Our voice converter is inspired by the one used in (Zhang et al., 2019b).", "However, in this work we utilize a basic version of the model that does not include a two-layer fully connected PreNet, a five-layer 1D convolution PostNet, nor an attention module.", "We opt to use a simple implementation for voice conversion since our problem does not follow the sequence-to-sequence learning paradigm, as our input features are pre-aligned using dynamic time warping (DTW) (Mohammadi and Kain, 2017).", "Our final style autoencoder model has approximately 2.2 million parameters.", "We investigate how changes to the structure of the proposed EVoCA affect not only the quality of the converted speech, but also the quality of the extracted embeddings.", "We study the impact that the paralinguistic embedding and synthetic speech have on the voice converter by comparing the voice conversion performance when only one component is present.", "We also investigate the effect of reducing the capacity (i.e., the number of hidden units) of the paralinguistic encoder and the voice converter on the converted speech as well as on the extracted embeddings for downstream classification tasks.", "Specifically, we keep the voice converter fixed and reduce the hidden size of the BLSTM paralinguistic encoder gradually from 256 units to 32 units (reducing the number of parameters from 2.2 million to 1.5 million), noting performance changes on the two tasks.", "Then, we keep the paralinguistic encoder fixed and reduce the hidden size of the BLSTM voice converter from 256 units to 32 units (reducing the number of parameters from 2.2 million to 0.7 million), again noting performance changes on the two tasks.",
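A minimal sketch of these two components and one training step follows. The layer counts and hidden sizes match the text; how the embedding conditions the converter is not fully specified, so broadcasting it and concatenating it to every synthetic frame is an assumption.

```python
import torch
import torch.nn as nn

class ParalinguisticEncoder(nn.Module):
    """Two BLSTM layers (hidden 256); mean-pool the top-layer states and
    halve the size with a linear layer, as described above."""
    def __init__(self, n_mfb=80, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_mfb, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, hidden)

    def forward(self, x):                          # x: (batch, T, 80)
        h, _ = self.rnn(x)
        return self.proj(h.mean(dim=1))            # (batch, 256) embedding

class VoiceConverter(nn.Module):
    """Four BLSTM layers (hidden 256) plus a 1D conv with 80 channels and
    kernel size 1; the paralinguistic embedding is broadcast and concatenated
    to each synthetic frame (an assumed conditioning mechanism)."""
    def __init__(self, n_mfb=80, emb=256, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_mfb + emb, hidden, num_layers=4,
                           bidirectional=True, batch_first=True)
        self.conv = nn.Conv1d(2 * hidden, n_mfb, kernel_size=1)

    def forward(self, synth, z):
        z = z.unsqueeze(1).expand(-1, synth.size(1), -1)
        h, _ = self.rnn(torch.cat([synth, z], dim=-1))
        return self.conv(h.transpose(1, 2)).transpose(1, 2)

# One training step: reconstruct the real expressive MFBs from the pair.
enc, conv = ParalinguisticEncoder(), VoiceConverter()
real = torch.randn(2, 120, 80)                     # real expressive MFBs
synth = torch.randn(2, 120, 80)                    # DTW-aligned synthetic MFBs
loss = nn.functional.mse_loss(conv(synth, enc(real)), real)  # L2 loss
loss.backward()
```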
"Note that these hyper-parameters are not, and should not be, tuned based on the performance of the downstream task, as the goal of this experiment is to analyze how these parameters affect the qualities of the transformed features and the converted speech.", "We split the Blizzard2013 data into training, validation, and test partitions following a random 90%/5%/5% split rule.", "We train our style autoencoder on the training partition and use the validation partition for loss monitoring and early stopping.", "Conversion performance is reported on the test partition of the data.", "We construct the network in PyTorch and train it from scratch with batches of size 128 using the ADAM optimizer for a total of 80 epochs.", "We use an initial learning rate of 10^-4 and decrease it exponentially using a decay factor of 0.95 after each epoch, starting from epoch 30.", "We monitor the validation loss after each epoch and perform early stopping if the validation loss does not improve for 15 consecutive epochs.", "The first unsupervised baseline that we consider is a convolutional autoencoder that is applied to fixed-length MFB segments of 128 frames.", "The autoencoder is similar to the one used in (Eskimez et al., 2018).", "The encoder consists of three 2D convolution layers, of shape [32 x 9 x 9], [64 x 7 x 7], and [128 x 5 x 5], followed by a linear layer with 256 units.", "A [2 x 2] max pooling operation is applied after each layer to reduce the dimensionality of the input by two.", "The decoder consists of a linear layer with 256 units followed by four 2D convolution layers of shape [128 x 5 x 5], [64 x 7 x 7], [32 x 9 x 9], and [1 x 1 x 1].", "A [2 x 2] nearest neighbor up-sampling operation is applied after each layer to get back the original size of the input.", "Both the encoder and the decoder use Leaky ReLU activation units, and the autoencoder has approximately 3.9 million parameters.", "The second unsupervised baseline that we consider is the Autoregressive Predictive Coding (APC) model that was introduced in (Chung et al., 2019).", "Given an input of MFB features, the APC model is trained to predict the features n time-steps in the future.", "The APC model that we use is similar to the one used by Chung et al., and it consists of three LSTM layers, each with a width of 512.",
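For reference, an APC baseline in this spirit can be sketched as follows (three 512-unit LSTM layers per the text; the L1 objective follows Chung et al. (2019), and the shift logic is illustrative):

```python
import torch
import torch.nn as nn

class APC(nn.Module):
    """Autoregressive Predictive Coding sketch: an LSTM reads MFB frames and
    predicts the frame n steps ahead."""
    def __init__(self, n_mfb=80, hidden=512, layers=3):
        super().__init__()
        self.rnn = nn.LSTM(n_mfb, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, n_mfb)

    def forward(self, x, n=20):
        # x: (batch, T, n_mfb); h at step t predicts x[:, t + n].
        h, _ = self.rnn(x[:, :-n])
        return self.proj(h), x[:, n:]              # predictions, targets

model = APC()
mfb = torch.randn(2, 100, 80)
pred, target = model(mfb, n=20)
loss = nn.functional.l1_loss(pred, target)         # L1, per Chung et al.
loss.backward()
```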
"We run our experiments with three values for n: 5, 10, and 20.", "Once trained, the outputs from the last LSTM layer are averaged to obtain fixed-size features for downstream tasks.", "The APC model that we use has approximately 5.5 million parameters.", "We train both the autoencoder and the APC baselines on the Blizzard2013 dataset.", "We use the same protocol we use for training EVoCA when training the autoencoder baseline.", "However, we train the APC baselines for 100 epochs, following the authors' recommendation.", "We test the utility of the learned paralinguistic encoder for transforming MFB features to highlight their paralinguistic attributes in emotion recognition and speaking style detection tasks.", "First, we assess if transforming MFB features provides any advantage over using surface MFB features on the two tasks.", "Then, we compare the learned feature transformation to those obtained using the unsupervised and supervised baselines.", "We follow a leave-one-speaker-out evaluation scheme and report the average performance across all test speakers on all four downstream tasks.", "For each test speaker, we pick the model that gives the best performance on a held-out validation set.", "The hyper-parameters that we optimize on the validation set include the number of hidden layers {1, 2, 3}, the width of each hidden layer {64, 128, 256}, and the activation unit {Tanh, ReLU}.", "We construct the networks in PyTorch and train them with batches of size 32 using the ADAM optimizer with a learning rate of 10^-4 and a cross-entropy loss function.", "We train each model for a maximum of 100 epochs and apply early stopping if the validation loss does not improve for five consecutive epochs.", "We repeat each experiment with 30 different random seeds and report the average and standard deviation to account for performance fluctuation due to random initialization and training.", "Is the proposed framework capable of inserting expressiveness into synthetic speech?", "We obtain an F0 RMSE of 146.20 when computing the performance using the synthetic reference speech and ground-truth expressive speech.", "In comparison, we obtain an MCD of 10.71 and an F0 RMSE of 64.36 when computing the performance using the converted speech and the ground-truth expressive speech.", "This suggests that the proposed EVoCA framework converts the synthetic speech so that it is closer to the expressive speech.", "We note that it is possible to obtain better conversion performance if we increase the capacity of the model and utilize a more sophisticated vocoder.", "However, as the results for question 3 will suggest, increasing the capacity of the voice converter might not necessarily yield better embeddings for downstream classification tasks.", "Can the learned paralinguistic embeddings be used for emotion and style classification?", "Table 2 shows that our paralinguistic embeddings significantly outperform MFB surface features on both the emotion recognition and the speaking style detection tasks.", "This suggests that the paralinguistic encoder learns a feature transformation that highlights latent paralinguistic attributes in surface acoustic features.", "How do changes to EVoCA's structure affect the converted speech quality as well as the quality of the extracted embeddings for downstream tasks?", "Figure 3 visually demonstrates the effect of each input on the quality of a converted utterance.", "Figure 3a shows that the converted speech has higher quality when the paralinguistic embedding is provided as an input, compared to Figure 3b.",
"Specifically, the harmonic structure in Figure 3a is well defined and dynamic while that in Figure 3b is relatively static and not well separated.", "Figure 3c shows that the model is unable to generate speech solely from paralinguistic embeddings.", "We hypothesize that this is due to the embeddings' limited capacity to encode both the linguistic and paralinguistic information present in the original signal to allow for accurate reconstruction.", "Additionally, we believe paralinguistic embeddings struggle to model time-varying phenomena like rhythm and speech activity because they are computed using a global average over LSTM outputs.", "Table 1 quantitatively shows the effect of each of these two inputs on the conversion performance.", "We find that the synthesized reference input is more important to the conversion task than the paralinguistic embedding is.", "This is highlighted by the larger impact that reducing the capacity of the voice converter has on the converted speech quality, compared to the impact of reducing the capacity of the paralinguistic encoder.", "This can be due to the fact that the paralinguistic embeddings do not have enough capacity to encode the linguistic attributes in speech that are necessary for obtaining good voice conversion performance.", "Tables 1 and 2 show the results obtained on the voice conversion task and the downstream classification tasks, respectively.", "We find that while a high capacity voice converter improves the quality of the converted speech, it can also degrade the quality of the extracted embeddings as measured on the classification tasks.", "For instance, we find that reducing the capacity of the voice converter from 256 to 128 decreases the conversion performance on the voice conversion task but improves the classification performance on two out of the four downstream tasks.", "The results suggest that using a high-capacity voice converter can reduce EVoCA's reliance on the paralinguistic encoder for providing style and emotion information, causing the encoder to perform poorly when used to transform features for downstream applications.", "How does the performance of paralinguistic embeddings compare to the embeddings learned from other unsupervised methods?", "Table 2 shows that paralinguistic embeddings encode information that is more suited to paralinguistic tasks than those extracted from other unsupervised methods, namely APC and a traditional autoencoder.", "The APC model provides improvements over surface features on all four downstream tasks when using the 20-step setup and shows improvements over surface features on three downstream tasks when using the 10-step setup.", "In contrast, a standard autoencoder fails to provide any improvements over surface features on all tasks.", "We believe that the success of the embeddings extracted from EVoCA demonstrates the importance of targeted unsupervised tasks.", "We proposed EVoCA, a framework for learning a surface feature transformation that highlights the paralinguistic content needed for detecting emotion and speaking style.", "We first showed that speech synthesizers can be used to strip away paralinguistic attributes from speech while retaining linguistic information.", "We demonstrated how a neural voice conversion model can be adapted to facilitate the extraction of paralinguistic features by converting synthetic neutral speech to real expressive speech.", "Finally, we showed that these extracted embeddings improve performance over surface 
features and can outperform other embeddings extracted from existing unsupervised methods on emotion recognition and speaking style detection tasks.", "Future work will consider how the choice of the synthesis model, the number of speakers in the training set, and the architecture used for the encoder affect the quality of the extracted embeddings.", "Potential Benefits.", "A variety of applications can benefit from the automatic detection of paralinguistic attributes (e.g., emotion) from speech; some of these applications include: human-robot interaction, medical applications, and speaker verification to name a few.", "The framework that we introduce can impact these applications by enabling the utilization of data that are not labeled for paralinguistic attributes when building the detection models for these domains.", "Potential Risks.", "The behavior and performance of all data-driven models heavily depend on the data that are used for building them.", "Thus, the decisions that these models make will reflect any biases that exist in the data.", "Some attributes that can bias speech data include: age, gender, dialect, accent, language, recording conditions, and environment.", "We encourage the deployment of our framework with full consideration of these biases and their consequences on the target application." ]
[ "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "result", "result", "objective", "other", "other", "other", "objective", "objective", "other", "other", "objective", "method", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "objective", "other", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method" ]
[ "Recent work on aspect-level sentiment classification has demonstrated the efficacy of incorporating syntactic structures such as dependency trees with graph neural networks (GNN), but these approaches are usually vulnerable to parsing errors.", "To better leverage syntactic information in the face of unavoidable errors, we propose a simple yet effective graph ensemble technique, GraphMerge, to make use of the predictions from different parsers.", "Instead of assigning one set of model parameters to each dependency tree, we first combine the dependency relations from different parses before applying GNNs over the resulting graph.", "This allows GNN models to be robust to parse errors at no additional computational cost, and helps avoid overparameterization and overfitting from GNN layer stacking by introducing more connectivity into the ensemble graph.", "Our experiments on the SemEval 2014 Task 4 and ACL 14 Twitter datasets show that our GraphMerge model not only outperforms models with single dependency tree, but also beats other ensemble models without adding model parameters.", "Aspect-level sentiment classification is a fine-grained sentiment analysis task, which aims to identify the sentiment polarity (e.g., positive, negative or neutral) of a specific aspect term in a sentence.", "For example, in The exterior, unlike the food, is unwelcoming. , the polarities of aspect terms exte-rior and food are negative and positive, respectively.", "This task has many applications, such as assisting customers to filter online reviews or make purchase decisions on e-commerce websites.", "Recent studies have shown that syntactic information such as dependency trees is very effective in capturing long-range syntactic relations that are obscure from the surface form (Zhang et al., 2018).", "Several successful approaches employed food The exterior unlike the , is unwelcoming , .", "graph neural network (GNN) (Kipf and Welling, 2016) model over dependency trees to aspect-level sentiment classification (Huang and Carley, 2019; Zhang et al., 2019; Sun et al., 2019; Wang et al., 2020b), which demonstrate that syntactic information is helpful for associating the aspect term with relevant opinion words more directly for increased robustness in sentiment classification.", "However, existing approaches are vulnerable to parsing errors (Wang et al., 2020b).", "For example, in Figure 1, the blue parse above the sentence can mislead models to predict negative sentiment for the aspect term food with its direct association to unwelcoming.", "Despite their high edge-wise parsing performance on standard benchmarks, state-of-the-art dependency parsers usually struggle to predict flawless parse trees especially in out-of-domain settings.", "This poses great challenge to dependency-based methods that rely on these parse treesthe added benefit from syntactic structure does not always prevail the noise introduced by model-predicted parses (He et al., 2017; Sachan et al., 2021).", "In this paper, we propose GraphMerge, a graph ensemble technique to help dependency-based models mitigate the effect of parsing errors.", "Our technique is based on the observation that different parsers, especially ones with different inductive biases, often err in different ways.", "For instance, in Figure 1, the green parse under the sentence is incorrect around unlike the food, but it nevertheless correctly associates unwelcoming with the other aspect term exterior, and therefore is less likely to mislead model predictions.", "Given 
"Given dependency trees from multiple parses, instead of assigning each dependency tree a separate set of model parameters and ensembling model predictions or dependency-based representations of the same input, we propose to combine the different dependency trees before applying representation learners such as GNNs.", "Specifically, we take the union of the edges in all dependency trees from different parsers to construct an ensemble graph, before applying GNNs over it.", "This exposes the GNN model to various graph hypotheses at once, and allows the model to learn to favor edges that contribute more to the task.", "To retain the syntactic dependency information between words in the original dependency trees, we also define two different edge types, parent-to-children and children-to-parent, which are encoded by applying relational graph attention networks (RGAT) (Busbridge et al., 2019) on the ensemble graph.", "Our approach has several advantages.", "Firstly, since GraphMerge combines dependency trees from different parsers, the GNN models can be exposed to multiple parsing hypotheses and learn to choose edges that are more suitable for the task from data.", "As a result, the model is less reliant on any specific parser and more robust to parsing errors.", "Secondly, this improved robustness to parsing errors does not require any additional computational cost, since we are still applying GNNs to a single graph with the same number of nodes.", "Last but not least, GraphMerge helps prevent GNNs from overfitting by limiting over-parameterization.", "Aside from keeping the GNN computation over a single graph to avoid separate parameterization for each parse tree, GraphMerge also introduces more edges in the graph when parses differ, which reduces the diameter of graphs.", "As a result, fewer layers of GNNs are needed to learn good representations from the graph, alleviating the over-smoothing problem (Li et al., 2018b).", "Our ensemble graph enables the model to learn from noisy graphs and select correct edges among nodes at no additional computational cost.", "We retain the syntactic dependency information in the original trees by parameterizing parent-to-children and children-to-parent edges separately, which improves the performance of the RGAT model on the ensemble graph.", "Our GraphMerge RGAT model outperforms recent state-of-the-art work on three benchmark datasets (Laptop and Restaurant reviews from SemEval 2014 and the ACL 14 Twitter dataset).", "It also outperforms its single-parse counterparts as well as other ensemble techniques.", "Much recent work on aspect-level sentiment classification has focused on applying attention mechanisms (e.g., co-attention, self-attention, and hierarchical attention) to sequence models such as recurrent neural networks (RNNs) (Tang et al., 2015, 2016; Liu and Zhang, 2017; Wang et al., 2018; Fan et al., 2018; Chen et al., 2017; Zheng and Xia, 2018; Wang and Lu, 2018; Li et al., 2018a,c).", "In a similar vein, pretrained transformer language models such as BERT (Devlin et al., 2018) have also been applied to this task, operating directly on word sequences (Song et al., 2019; Xu et al., 2019; Rietzler et al., 2019).", "In parallel, researchers have also found syntactic information to be helpful for this task, and incorporated it into aspect-level sentiment classification models in the form of dependency trees (Dong et al., 2014; He et al., 2018) as well as constituency trees (Nguyen and Shirai, 2015).", "More recently, researchers have developed robust dependency-based models with the help of GNNs that operate either directly on dependency trees (Huang and Carley, 2019; Zhang et al., 2019; Sun et al., 2019), or on reshaped dependency trees that center around aspect terms (Wang et al., 2020b).",
"While most recent work stacks GNNs on top of BERT models, Tang et al. (2020) have also reported gains by jointly learning the two with a mutual biaffine attention mechanism.", "Despite the success of these dependency-based models, they are usually vulnerable to parse errors since they rely on a single parser.", "Tu et al. (2012) used a dependency forest to combine multiple dependency trees; however, they tackled the sentence-level sentiment analysis task instead, and [Figure 2: The framework of the GraphMerge model for aspect-level sentiment classification over multiple dependency trees.]", "their proposed ensemble technique is also significantly different from ours.", "Furthermore, most prior work that leverages GNNs to encode dependency information treats the dependency tree as an undirected graph, and therefore ignores the syntactic relation between words in the sentence.", "We are interested in the problem of predicting the sentiment polarity of an aspect term in a given sentence.", "Specifically, given a sentence of $n$ words $\{w_1, w_2, \ldots, w_\tau, \ldots, w_{\tau+t}, \ldots, w_n\}$, where $\{w_\tau, w_{\tau+1}, \ldots, w_{\tau+t-1}\}$ is the aspect term, the goal is to classify the sentiment polarity toward the term as positive, negative, or neutral.", "Applying GNNs over dependency trees has been shown to be effective for this problem; however, it is vulnerable to parsing errors.", "Therefore, we propose the GraphMerge technique, which utilizes multiple dependency trees to improve robustness to parsing errors.", "In this section, we will first introduce GraphMerge, our proposed graph ensemble technique, and then introduce the GNN model over the GraphMerge graph for aspect-level sentiment analysis.", "To allow graph neural networks to learn dependency-based representations of words while being robust to parse errors that might occur, we introduce GraphMerge, which combines different parses into a single ensemble graph.", "Specifically, given a sentence $\{w_1, w_2, \ldots, w_n\}$ and $M$ different dependency parses $G_1, \ldots, G_M$, GraphMerge takes the union of the edges from all parses, and constructs a single graph $G$ as follows: $G = (V, \{e \mid e = (w_i, w_j) \in \bigcup_{m=1}^{M} E_m\})$ (1), where $V$ is the shared set of nodes among all graphs (this holds for dependency trees as long as the parsers share the same tokenization as input) and $E_m$ ($1 \le m \le M$) is the set of edges in $G_m$ (see the right side of Figure 2 for an example).",
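Equation 1 amounts to a simple edge-set union over a shared tokenization. A minimal sketch follows, with hypothetical head lists standing in for the three parsers' outputs:

```python
# GraphMerge (Eq. 1): union of directed head -> dependent edges from several
# parses of the same tokenised sentence. Each parse is a head list where
# heads[i] is the index of token i's head (-1 for the root).
def graph_merge(parses):
    edges = set()
    for heads in parses:
        for dep, head in enumerate(heads):
            if head >= 0:
                edges.add((head, dep))      # parent-to-children edge
    return edges

# Three hypothetical parses of a 5-token sentence (illustrative head lists):
corenlp  = [1, -1, 1, 4, 2]
stanza   = [1, -1, 4, 4, 1]
berkeley = [1, -1, 1, 4, 1]
merged = graph_merge([corenlp, stanza, berkeley])
# The ensemble graph keeps every edge proposed by any parser; reciprocal
# children-to-parent edges and self-loops are added before running RGAT.
```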
"As a result, $G$ contains all of the (directed) edges from all dependency trees, on top of which we can apply the same GNN models as when a single dependency tree is used.", "Therefore, GraphMerge introduces virtually no computational overhead to existing GNN approaches, compared to traditional ensemble approaches where computational time and/or parameter count scale linearly in $M$.", "Note that parsing time is not counted toward computational cost, because the dependency trees from the three parsers can be obtained in parallel, making the running time the same as with a single parser.", "What is more, the resulting graph $G$ likely contains more edges from the gold parse, which correctly captures the syntactic relations between words in the sentence, allowing the GNN to be robust to parse errors from any specific parser.", "Finally, since $G$ contains more edges between words when parses differ than any single parse, which reduces the diameter of the graph, it is also more likely that a shallower GNN model is enough to learn good representations, therefore avoiding over-parameterization and thus overfitting from stacking more GNN layers.", "To learn node representations from ensemble graphs, we apply graph attention networks (GAT; Velickovic et al., 2017).", "In one layer of GAT, the hidden representation of each node in the graph is computed by attending over its neighbors, with a multi-head self-attention mechanism.", "The representation for word $i$ at the $l$-th layer of GAT can be obtained as follows: $h_i^{(l)} = \big\Vert_{k=1}^{K} \sigma\big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}^{k} W^{k} h_j^{(l-1,k)}\big)$ (2), where $K$ is the number of attention heads, $\mathcal{N}_i$ is the neighborhood of node $i$ in the graph, and $\Vert$ is the concatenation operation.", "$W^k \in \mathbb{R}^{d_B \times d_h}$ represents the learnable weights in GAT and $\sigma$ denotes the ReLU activation function.", "$\alpha_{ij}^{k}$ is the attention score between node $i$ and node $j$ with head $k$.", "Edge Types.", "To apply GAT to ensemble graphs, we first add reciprocal edges for each edge in the dependency tree, and label them with parent-to-children and children-to-parent types, respectively.", "This allows our model to retain the original syntactic relation between words in the sentence.", "We also follow previous work to add a self loop to each node in the graph, which we differentiate from dependency edges by introducing a third edge type.", "We adapt Relational GAT (RGAT) to capture this edge type information.", "Specifically, we encode the edge type information when computing the attention score between two nodes.", "We assign each edge type an embedding $e \in \mathbb{R}^{d_h}$, and incorporate it into the attention score computation as follows: $\alpha_{ij} = \frac{\exp(\sigma(a^\top W(h_i \Vert h_j) + a_e^\top e_{ij}))}{\sum_{v \in \mathcal{N}_i} \exp(\sigma(a^\top W(h_i \Vert h_v) + a_e^\top e_{iv}))}$ (3), where $e_{ij}$ is the representation of the type of the edge connecting nodes $i$ and $j$.", "We take the representations of the aspect term words from the final RGAT layer and conduct average pooling to obtain $h_t \in \mathbb{R}^{d_h}$.", "Then we feed it into a two-layer MLP to calculate the final classification scores $y_s$: $y_s = \mathrm{softmax}(W_2\,\mathrm{ReLU}(W_1 h_t))$ (4), where $W_2 \in \mathbb{R}^{C \times d_{out}}$ and $W_1 \in \mathbb{R}^{d_{out} \times d_h}$ denote learnable weight matrices, and $C$ is the number of sentiment classes.", "We optimize the model to minimize the standard cross entropy loss function, and apply weight decay to model parameters.",
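A single attention head with edge-type embeddings (Eq. 3) can be sketched as below; the exact placement of the nonlinearity and the parameter shapes are assumptions consistent with Eqs. 2-3.

```python
import torch
import torch.nn as nn

class EdgeTypedAttention(nn.Module):
    """One RGAT-style attention head: the score for edge (i, j) adds a learned
    term for its type (parent-to-children, children-to-parent, or self-loop)."""
    def __init__(self, d_h=128, n_types=3):
        super().__init__()
        self.W = nn.Linear(2 * d_h, d_h, bias=False)
        self.a = nn.Parameter(torch.randn(d_h))
        self.edge_emb = nn.Embedding(n_types, d_h)  # one embedding per type
        self.a_e = nn.Parameter(torch.randn(d_h))

    def score(self, h_i, h_j, etype):
        # h_i, h_j: (n_edges, d_h); etype: (n_edges,) edge-type ids.
        pair = self.W(torch.cat([h_i, h_j], dim=-1))
        e = self.edge_emb(etype)
        return torch.relu(pair @ self.a + e @ self.a_e)  # unnormalised score

att = EdgeTypedAttention()
h = torch.randn(10, 128)                                 # node representations
# Score edge (0 -> 1) of type 0 (parent-to-children) and (1 -> 0) of type 1:
s = att.score(h[[0, 1]], h[[1, 0]], torch.tensor([0, 1]))
# Scores are then softmax-normalised over each node's neighbourhood and used
# to average neighbour messages, as in Eq. 2.
```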
"We optimize the model to minimize the standard cross-entropy loss function, and apply weight decay to the model parameters.", "The initial word node features for RGAT are obtained from a BERT encoder, with positional information from positional embeddings.", "BERT Encoder.", "We use the pre-trained BERT base model as the encoder to obtain word representations.", "Specifically, we construct the input as [CLS] + sentence + [SEP] + term + [SEP] and feed it into BERT.", "This allows BERT to learn term-centric representations from the sentence during fine-tuning.", "To feed the resulting wordpiece-based representations into the word-based RGAT model, we average-pool the representations of the subwords of each word to obtain $X$, the raw input to RGAT.", "Positional Encoding.", "Position information is beneficial for this task, especially when there are multiple aspect terms in one sentence, where it helps to locate the opinion words relevant to an aspect term.", "Although the BERT encoder already takes word position into consideration, this information is dampened after layers of Transformers.", "Therefore, we explicitly encode the absolute position of each word and add it to the BERT output.", "Specifically, we add a trainable position embedding matrix to $X$ before feeding the resulting representation into RGAT.", "We evaluate on SemEval 2014 Task 4 (14Rest and 14Lap) and the ACL 14 Twitter dataset (Twitter) (Dong et al., 2014).", "We remove several examples with conflicting sentiment polarity labels from the reviews.", "The statistics of these datasets are listed in Table 1.", "Following previous work, we report the accuracy and macro-F1 scores for sentiment classification.", "For dependency-based approaches, we tokenize sentences with Stanford CoreNLP (Manning et al., 2014), and then parse them with CoreNLP, Stanza (Qi et al., 2020), and the Berkeley neural parser (Kitaev and Klein, 2018).", "Since the Berkeley parser returns constituency parses, we further convert them into dependency parses using CoreNLP.",
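The term-centric input construction and the subword-to-word pooling that produces X can be sketched with the HuggingFace transformers API. This is an illustrative reconstruction under that assumption, not the paper's code, and encode_words is a hypothetical helper name.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode_words(words, term_words):
    # Builds [CLS] sentence [SEP] term [SEP] from pre-tokenized inputs.
    enc = tokenizer(words, term_words, is_split_into_words=True,
                    return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state.squeeze(0)  # (pieces, 768)
    seq_ids = enc.sequence_ids(0)   # 0 = sentence, 1 = term, None = specials
    word_ids = enc.word_ids(0)      # word index of each wordpiece
    pooled = []
    for w in range(len(words)):
        idx = [i for i, (s, wid) in enumerate(zip(seq_ids, word_ids))
               if s == 0 and wid == w]
        pooled.append(hidden[idx].mean(dim=0))  # average over subwords
    return torch.stack(pooled)      # X: (n_words, 768), the raw RGAT input
```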
"Baselines.", "We compare our GraphMerge model against published work on these benchmarks, including: BERT-SPC (Song et al., 2019), which feeds the sentence and term pair into the BERT model and uses the BERT outputs for predictions; and AEN-BERT (Song et al., 2019), which uses BERT as the encoder and employs several attention layers.", "BERT + dependency tree based models: DGEDT-BERT (Tang et al., 2020) proposes a mutual biaffine module to jointly consider the representations learnt from the Transformer and from the GNN model over the dependency tree; R-GAT+BERT (Wang et al., 2020b) reshapes and prunes the dependency tree into an aspect-oriented tree rooted at the aspect term, and then employs RGAT to encode the new tree for predictions.", "For a fair comparison, we report the results of our GraphMerge model using the same data split (without a development set).", "To understand the behavior of different models, we also implement several baseline models.", "In our experiments, we randomly sample 5% of the training data as a held-out development set for hyper-parameter tuning, use the remaining 95% for training, and report the average and standard deviation over five runs with random initialization on the test set.", "We consider these baselines: 1. BERT-baseline, which feeds the sentence-term pair into the BERT-base encoder and then applies a classifier to the representation of the aspect term token.", "2. GAT-baseline with Stanza, which employs a vanilla GAT model over the single dependency tree obtained from Stanza, without differentiating edge types; the initial node features are the raw output of the BERT encoder.", "3. RGAT over single dependency trees, where we apply RGAT models with parent-to-children and children-to-parent edge types over the different dependency trees from the CoreNLP, Stanza, and Berkeley parsers.", "For a fair comparison to our GraphMerge model, the RGAT input comes from the BERT encoder plus position embeddings.", "4. Two ensemble models that take advantage of multiple dependency trees: a Label-Ensemble model, which takes the majority vote of three models each trained on one parser's trees, and a Feature-Ensemble model, which applies three sets of RGAT parameters, one per parse, on top of the BERT encoder, with their output features concatenated.", "These models have more parameters and are more computationally expensive than the GraphMerge model when operating on the same parses.", "Parameter Setting.", "We use PyTorch (Paszke et al., 2019) to implement our models.", "The GAT implementation is based on the Deep Graph Library (Wang et al., 2019).", "During training, we set the learning rate to $10^{-5}$ and the batch size to 4.", "We use the dev data to select the hidden dimension $d_h$ for GAT/RGAT from {64, 128, 256}, the number of heads in the multi-head self-attention from {4, 8}, and the number of GAT/RGAT layers from {2, 3, 4}.", "The 2-layer GAT/RGAT models turn out to be the best based on the dev set.", "We apply dropout (Srivastava et al., 2014) and select the best setting from the dropout rate range [0.1, 0.3].", "We set the weight of the L2 regularization to $10^{-6}$.", "We train the model for up to 5 epochs.",
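The reported configuration can be condensed into a short setup sketch. The optimizer class below is an assumption, since the text specifies only the learning rate, batch size, weight decay, dropout range, and epoch budget.

```python
import torch

def make_optimizer(model):
    # lr = 1e-5 and L2 weight decay = 1e-6, as reported; batch size is 4
    # and training runs for up to 5 epochs in the surrounding loop.
    return torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=1e-6)

# Dev-set search space; the 2-layer setting was selected on the dev sets.
search_space = {
    "hidden_dim": [64, 128, 256],
    "attention_heads": [4, 8],
    "rgat_layers": [2, 3, 4],
    "dropout": [0.1, 0.2, 0.3],  # "range [0.1, 0.3]" read as a small grid
}
```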
"Results.", "We first compare our model to previous work, following the evaluation protocol used in that work, and report the results in Table 2.", "As we can see, the GraphMerge model achieves the best performance on all three datasets.", "On the Laptop dataset, the GraphMerge model further outperforms the baselines by at least 1.42 in accuracy and 2.34 in Macro-F1.", "Ensemble models benefit from multiple parses.", "The Label-Ensemble, Feature-Ensemble, and GraphMerge models achieve better performance compared to their single dependency tree counterparts.", "This shows that ensemble models benefit from the presence of different parses and are thus less sensitive to parse errors from any single parser.", "GraphMerge achieves the best performance overall.", "Our proposed GraphMerge model not only shows consistent improvements over all single dependency tree models, but also surpasses the other two ensemble models, without additional parameters or computational overhead compared to the single-tree models.", "Note that although on this specific task the best results are achieved using three trees in GraphMerge, the number of trees to ensemble depends on the task and dataset.", "We analyze the proposed GraphMerge model from two perspectives: an ablative analysis of model components and an analysis of the change in the graph structure.", "Model components.", "We conduct ablation studies of our modeling of edge type and position information in Table 4.", "We observe that: (1) On all three datasets, ablating the edge types degrades the performance.", "This indicates that the syntactic dependency information in the original dependency trees is important.", "Differentiating edges in the ensemble graph provides more guidance to the model about selecting useful connections among nodes.", "(2) Removing the position embeddings hurts the performance as well.", "Although the BERT encoder already incorporates position information at its input, this information is dampened over the layers of Transformers.", "Emphasizing sequence order again before applying RGAT benefits the task.", "Edge Union vs. Edge Intersection.", "While GraphMerge keeps all edges from the different dependency parse trees for the RGAT model to learn to use, this could also result in too much structural noise and adversely impact performance.", "We therefore compare GraphMerge to edge intersection, which retains only the edges shared by all individual trees when constructing the ensemble graph, and which can be thought of as distilling syntactic information that an ensemble parser is confident about.", "Table 4: Ablation study of the GraphMerge model over three datasets (mean ± std).
| Model | 14Rest Acc | 14Rest Macro-F1 | 14Lap Acc | 14Lap Macro-F1 | Twitter Acc | Twitter Macro-F1 |
| GraphMerge | 85.16±0.53 | 77.91±0.87 | 80.00±0.63 | 76.50±0.64 | 74.74±0.93 | 73.66±0.88 |
| − Edge type | 84.25±0.59 | 76.15±1.24 | 78.65±0.51 | 74.76±0.71 | 74.37±1.08 | 73.25±0.85 |
| − Position | 84.36±0.36 | 75.92±1.18 | 78.37±0.31 | 74.51±0.48 | 74.28±1.39 | 73.34±1.35 |
| − (Edge type + Position) | 84.16±0.31 | 75.38±0.69 | 78.09±0.27 | 74.29±0.64 | 73.41±0.63 | 72.52±0.62 |
| Edge Intersection | 84.59±0.61 | 77.06±1.07 | 78.65±0.94 | 74.86±1.42 | 74.68±0.83 | 73.45±0.73 |", "[Figure 3(b): Accuracy w.r.t. different hop numbers on 14Rest.]", "We observe from the last row of Table 4 that the edge intersection strategy underperforms GraphMerge in average accuracy and Macro-F1.", "We postulate that this is because edge intersection over-prunes edges in the ensemble graph and might introduce more disjoint connected components where the parsers disagree, which the RGAT model cannot easily recover from.", "Effect of GraphMerge on Graph Structure.", "To better understand the effect of GraphMerge on dependency graphs, we conduct a statistical analysis on the test sets of 14Lap and 14Rest.", "Specifically, we are interested in the change in the shortest distance between the aspect term and its opinion words on the dependency graphs.", "For this analysis, we use the test sets with opinion words labeled by Fan et al. (2019) (see Table 5 for dataset statistics).", "We summarize the analysis results in Figure 3.",
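The shortest-distance statistic summarized in Figure 3 can be computed directly on an edge set, for a single parse or for the merged graph. A small sketch, using networkx as an assumed convenience rather than the authors' tooling:

```python
import networkx as nx

def term_opinion_distance(edge_sets, term_idx, opinion_idx):
    """Hop distance between two tokens; edges are (head, child) pairs."""
    g = nx.Graph()                 # undirected view, matching hop counting
    for edges in edge_sets:        # pass one parse, or several to merge them
        g.add_edges_from(edges)
    try:
        return nx.shortest_path_length(g, term_idx, opinion_idx)
    except nx.NetworkXNoPath:
        return float("inf")
```

Since the merged graph is a supergraph of every individual tree, this distance can only stay the same or shrink when parses are merged, which is what the observations below quantify.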
"We observe that: (1) Compared with a single dependency tree, the ensemble graph effectively increases the number of one-hop and two-hop cases, meaning the overall distance between the term and opinion words is shortened on both datasets.", "(2) A shorter distance between the term and opinion words correlates with better performance.", "With the ensemble graph, the accuracy on one-hop and two-hop cases beats that of all single dependency tree models.", "These observations suggest that the ensemble graph from GraphMerge introduces important connectivity that helps alleviate over-parameterization from stacking RGAT layers, and that the RGAT model is able to make use of the diversity of edges in the resulting graph to improve classification performance.", "Note that although a shortened distance correlates with improved results, it does not mean that a closer distance is sufficient for better performance.", "This is because although the BERT model can be seen as a GAT over a fully-connected graph, where a word is reachable from all other context words within one hop (Wang et al., 2020a), the BERT-baseline model performs worse than the dependency-based models.", "[Figure 4 example: 'It doesn't look like much on the outside, but the minute you walk inside, it's a whole other atmosphere.']", "Therefore, encoding the syntactic structure information of dependency trees is crucial for this task.", "Our GraphMerge model achieves the best results by shortening the graph distance between the aspect term and opinion words with syntactic information.", "Case Study.", "To gain more insight into the GraphMerge model's behaviour, we select several examples and visualize their dependency trees from the three parsers (Figure 4).", "Due to the space limit, we only show the partial dependency trees that contain the essential aspect terms and opinion words.", "These examples are selected from cases that all single dependency tree RGAT models predict incorrectly, but the GraphMerge model predicts correctly.", "We observe that, in general, the three parsers do not agree in the neighborhood around the aspect term and opinion words in these sentences.", "As a result, GraphMerge tends to shorten the distance between the aspect term and the opinion words on the resulting graph.", "For instance, for all examples in Figure 4, the shortest distances between the aspect term and the opinion words are no more than two in the ensemble graphs, while they vary from 2 to 4 in the original parse trees.", "This could allow the RGAT model to capture the relation between the words without an excessive number of layers, thus avoiding overfitting.", "On the other hand, we observe that the resulting ensemble graph from GraphMerge is more likely to contain the gold parse for the words in question.", "For instance, in the first two examples, the gold parse for the words visualized in the figure can be found in the ensemble graph (despite no individual parser predicting it in the first example); the third example also has a higher recall of gold parse edges than each parser, despite being difficult to parse.", "Table 6: Statistics of the robustness testing data ARTS.
| Dataset | Positive | Neutral | Negative | Total |
| Laptop | 883 | 407 | 587 | 1877 |
| Restaurant | 1953 | 473 | 1104 | 3530 |", "This provides the RGAT model with the correct semantic relationship between these words in more examples during training and evaluation, which is often not accessible with single parse trees.", "Aspect Robustness.",
"To study the aspect robustness of the GraphMerge model, we test our model on the Aspect Robustness Test Set (ARTS) datasets proposed by Xing et al. (2020) (see Table 6 for statistics).", "The datasets enrich the original 14Lap and 14Rest datasets following three strategies: reverse the sentiment of the aspect term; reverse the sentiment of the non-target terms that originally have the same sentiment as the target term; and generate more non-target aspect terms that have sentiment polarities opposite to the target one.", "They propose a novel metric, the Aspect Robustness Score (ARS), which counts the correct classification of a source example and all of its variations generated by the above three strategies as one unit of correctness.", "We compare three single dependency tree models with the GraphMerge model in Table 7.", "We directly evaluate the models trained on the original SemEval datasets on ARTS, without further tuning.", "The results indicate that the GraphMerge model shows better aspect robustness than the single dependency tree and BERT models.", "We propose a simple yet effective graph-ensemble technique, GraphMerge, to combine multiple dependency trees for aspect-level sentiment analysis.", "By taking the union of edges from different parsers, GraphMerge allows graph neural models to be robust to parse errors without additional parameters or computational cost.", "With different edge types to capture the original syntactic dependencies in the parse trees, our model outperforms previous state-of-the-art models, single-parse models, as well as traditional ensemble models on three aspect-level sentiment classification benchmark datasets.", "This work was supported by the National Key R&D Program of China under Grant No.2018YFB2100802 and No.2020AAA108600." ]
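The ARS metric described above is simple to compute once predictions are grouped by their source example; grouping by a shared example id is an assumed convention here, not part of the original dataset's format.

```python
from collections import defaultdict

def aspect_robustness_score(example_ids, correct_flags):
    """ARS: a unit counts as correct only if the source example and all of
    its generated variations are classified correctly."""
    units = defaultdict(list)
    for ex_id, ok in zip(example_ids, correct_flags):
        units[ex_id].append(ok)    # source + its variations share an id
    return sum(all(flags) for flags in units.values()) / len(units)
```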
[ "abstain", "objective", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "result", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "other", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "other" ]
[ "Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.", "In recent years, LegalAI has drawn increasing attention rapidly from both AI researchers and legal professionals, as LegalAI is beneficial to the legal system for liberating legal professionals from a maze of paperwork.", "Legal professionals often think about how to solve tasks from rule-based and symbol-based methods, while NLP researchers concentrate more on data-driven and embedding methods.", "In this paper, we describe the history, the current state, and the future directions of research in LegalAI.", "We illustrate the tasks from the perspectives of legal professionals and NLP researchers and show several representative applications in LegalAI.", "We conduct experiments and provide an in-depth analysis of the advantages and disadvantages of existing works to explore possible future directions.", "You can find the implementation of our work from https://github.", "com/thunlp/CLAIM .", "Legal Artificial Intelligence (LegalAI) mainly focuses on applying artificial intelligence technology to help legal tasks.", "The majority of the resources in this field are presented in text forms, such as judgment documents, contracts, and legal opinions.", "Therefore, most LegalAI tasks are based on Natural Language Processing (NLP) technologies.", "LegalAI plays a significant role in the legal domain, as they can reduce heavy and redundant work for legal professionals.", "Many tasks in the legal domain require the expertise of legal practitioners and a thorough understanding of various legal documents.", "Retrieving and understanding legal documents take lots of time, even for legal professionals.", "Therefore, a qualified system of LegalAI should reduce the time consumption of these tedious jobs and benefit the legal system.", "Besides, LegalAI can also provide a reliable reference to those who are not familiar with the legal domain, serving as an affordable form of legal aid.", "In order to promote the development of LegalAI, many researchers have devoted considerable efforts over the past few decades.", "Early works (Kort, 1957; Ulmer, 1963; Nagel, 1963; Segal, 1984; Gardner, 1984) always use hand-crafted rules or features due to computational limitations at the time.", "In recent years, with rapid developments in deep learning, researchers begin to apply deep learning techniques to LegalAI.", "Several new LegalAI datasets have been proposed (Kano et al., 2018; Xiao et al., 2018; Duan et al., 2019; Chalkidis et al., 2019b,a), which can serve as benchmarks for research in the field.", "Based on these datasets, researchers began exploring NLP-based solutions to a variety of LegalAI tasks, such as Legal Judgment Prediction (Aletras et al., 2016; Luo et al., 2017; Zhong et al., 2018; Chen et al., 2019), Court View Generation (Ye et al., 2018), Legal Entity Recognition and Classification (Cardellino et al., 2017; ANGELIDIS et al., 2018), Legal Question Answering (Monroy et al., 2009; Taniguchi and Kano, 2016; Kim and Goebel, 2017), Legal Summarization (Hachey and Grover, 2006; Bhattacharya et al., 2019).", "As previously mentioned, researchers' efforts over the years led to tremendous advances in LegalAI.", "To summarize, some efforts concentrate on symbol-based methods, which apply interpretable hand-crafted symbols to legal tasks (Ash-ley, 2017; Surden, 2018).", "Meanwhile, other efforts with embedding-based 
methods aim at designing efficient neural models to achieve better performance (Chalkidis and Kampas, 2019).", "More specifically, symbol-based methods concentrate more on utilizing interpretable legal knowledge to reason", "[Figure: Embedding-based methods (concept embedding, pretrained language model), symbol-based methods (relation extraction, event timeline, element detection), and applications of LegalAI (judgment prediction, similar case matching, question answering, text summarization); example concept knowledge graph with nodes such as intentional harm, arrested, alarm, escape, homicide, 'someone died?'.]", "between symbols in legal documents, like events and relationships.", "Meanwhile, embedding-based methods try to learn latent features for prediction from large-scale data.", "The differences between these two kinds of methods have caused some problems in existing works of LegalAI.", "Interpretable symbolic models are not effective, and embedding methods with better performance usually cannot be interpreted, which may bring ethical issues, such as gender bias and racial discrimination, to the legal system.", "These shortcomings make it difficult to apply existing methods to real-world legal systems.", "We summarize three primary challenges for both embedding-based and symbol-based methods in LegalAI: (1) Knowledge Modelling.", "Legal texts are well formalized, and LegalAI involves a great deal of domain knowledge and many domain concepts.", "How to utilize this legal knowledge is of great significance.", "(2) Legal Reasoning.", "Although most tasks in NLP require reasoning, LegalAI tasks are somewhat different, as legal reasoning must strictly follow the rules well-defined in law.", "Thus combining pre-defined rules and AI technology is essential to legal reasoning.", "Besides, complex case scenarios and complex legal provisions may require more sophisticated reasoning for analysis.", "(3) Interpretability.", "Decisions made in LegalAI should usually be interpretable in order to be applied to the real legal system.", "Otherwise, fairness may risk being compromised.", "Interpretability is as important as performance in LegalAI.", "The main contributions of this work are summarized as follows: (1) We describe existing works from the perspectives of both NLP researchers and legal professionals.", "Moreover, we illustrate several embedding-based and symbol-based methods and explore the future direction of LegalAI.", "(2) We describe three typical applications, including judgment prediction, similar case matching, and legal question answering, in detail to emphasize why these two kinds of methods are essential to LegalAI.", "(3) We conduct exhaustive experiments on multiple datasets to explore how to utilize NLP technology and legal knowledge to overcome the challenges in LegalAI.", "The implementation can be found on GitHub.", "(4) We summarize LegalAI datasets, which can be regarded as benchmarks for the related tasks.", "The details of these datasets can be found on GitHub, together with several legal papers worth reading.", "First, we describe embedding-based methods in LegalAI, also known as representation learning.", "Embedding-based methods emphasize representing legal facts and knowledge in an embedding space, and they can utilize deep learning methods for the corresponding tasks.", "Character and word embeddings play a significant role in NLP, as they can embed discrete texts into a", "continuous vector space.", "Many embedding methods have been proved effective (Mikolov et al., 2013; Joulin et al., 2016; Pennington et al., 2014; Peters et al., 2018; Yang et al., 2014; Bordes et al., 2013;
Lin et al., 2015), and they are crucial for the effectiveness of the downstream tasks.", "In LegalAI, embedding methods are also essential, as they can bridge the gap between texts and vectors.", "However, it seems impossible to learn the meaning of a professional term directly from legal factual descriptions alone.", "Existing works (Chalkidis and Kampas, 2019; Nay, 2016) mainly revolve around applying existing embedding methods like Word2Vec to legal domain corpora.", "To overcome the difficulty of learning professional vocabulary representations, we can try to capture both grammatical information and legal knowledge in word embeddings for the corresponding tasks.", "Knowledge modelling is significant to LegalAI, as many results should be decided according to legal rules and knowledge.", "Although knowledge graph methods in the legal domain are promising, there are still two major challenges before their practical usage.", "Firstly, the construction of a knowledge graph in LegalAI is complicated.", "In most scenarios, there are no ready-made legal knowledge graphs available, so researchers need to build them from scratch.", "In addition, different legal concepts have different representations and meanings under the legal systems of different countries, which also makes it challenging to construct a general legal knowledge graph.", "Some researchers tried to embed legal dictionaries (Cvrcek et al., 2012), which can be regarded as an alternative method.", "Secondly, a generalized legal knowledge graph is different in form from those commonly used in NLP.", "Existing knowledge graphs concern the relationships between entities and concepts, but LegalAI focuses more on the explanation of legal concepts.", "These two challenges make knowledge modelling via embedding in LegalAI non-trivial, and researchers can try to overcome them in the future.", "Pretrained language models (PLMs) such as BERT (Devlin et al., 2019) have recently been the focus of many fields in NLP (Radford et al., 2019; Yang et al., 2019; Liu et al., 2019a).", "Given the success of PLMs, using them in LegalAI is also a very reasonable and direct choice.", "However, there are differences between the text used by existing PLMs and legal text, which leads to unsatisfactory performance when directly applying PLMs to legal tasks.", "The differences stem from the terminology and knowledge involved in legal texts.", "To address this issue, Zhong et al.
(2019b) propose a language model pretrained on Chinese legal documents, including civil and criminal case documents.", "Legal domain-specific PLMs provide a more qualified baseline system for the tasks of LegalAI.", "We will show several experiments comparing different BERT models on LegalAI tasks.", "For the future exploration of PLMs in LegalAI, researchers can aim more at integrating knowledge into PLMs.", "Integrating knowledge into pretrained models can improve the ability to reason over legal concepts.", "Lots of work has been done on integrating knowledge from the general domain into models (Zhang et al., 2019; Peters et al., 2019; Hayashi et al., 2019).", "Such technology can also be considered for future application in LegalAI.", "In this section, we describe symbol-based methods, also known as structured prediction methods.", "Symbol-based methods involve utilizing legal domain symbols and knowledge for the tasks of LegalAI.", "Symbolic legal knowledge, such as events and relationships, can provide interpretability.", "Deep learning methods can be employed within symbol-based methods for better performance.", "Information extraction (IE) has been widely studied in NLP.", "IE emphasizes extracting valuable information from texts, and there are many NLP works which concentrate on IE, including named entity recognition (Lample et al., 2016; Kuru et al., 2016; Akbik et al., 2019), relation extraction (Zeng et al., 2015; Miwa and Bansal, 2016; Lin et al., 2016; Christopoulou et al., 2018), and event extraction (Chen et al., 2015; Nguyen et al., 2016; Nguyen and Grishman, 2018).", "IE in LegalAI has also attracted the interest of many researchers.", "To make better use of the particularity of legal texts, researchers try to use ontologies (Bruckschen et al., 2010; Cardellino et al., 2017; Lenci et al., 2009; Zhang et al., 2017) or global consistency (Yin et al., 2018) for named entity recognition in LegalAI.", "To extract relationships and events from legal documents, researchers attempt to apply different NLP technologies, including hand-crafted rules (Bartolini et al., 2004; Truyens and Eecke, 2014), CRF (Vacek and Schilder, 2017), joint models like SVM, CNN, and GRU (Vacek et al., 2019), or scale-free identifier networks (Yan et al., 2017), with promising results.", "Existing works have made lots of efforts to improve the effectiveness of IE, but we need to pay more attention to the benefits of the extracted information.", "The extracted symbols have a legal basis and can provide interpretability to legal applications, so we cannot just aim at the performance of the methods.", "Here, we show two examples of utilizing the extracted symbols for the interpretability of LegalAI: Relation Extraction and Inheritance Dispute.", "Inheritance dispute is a type of case in Civil Law that focuses on the distribution of inheritance rights.", "Therefore, identifying the relationships between the parties is vital, as those who have the closest relationship with the deceased can get more assets.", "Towards this goal, relation extraction in inheritance dispute cases can provide the reason for judgment results and improve performance.", "Event Timeline Extraction and Judgment Prediction of Criminal Cases.", "In criminal cases, multiple parties are often involved in group crimes.", "To decide who should be primarily responsible for the crime, we need to determine what everyone has done throughout the case, and the order of these events is also essential.", "For example, in the case of crowd fighting, the person
who fights first should bear the primary responsibility.", "As a result, a qualified event timeline extraction model is required for the judgment prediction of criminal cases.", "In future research, more attention should be paid to applying the extracted information to the tasks of LegalAI.", "The utilization of such information depends on the requirements of specific tasks, and the information can provide more interpretability.", "In addition to the common symbols of general NLP, LegalAI also has its own exclusive symbols, named legal elements.", "The extraction of legal elements focuses on extracting crucial elements, like whether someone was killed or something was stolen.", "These elements are called the constitutive elements of a crime, and we can directly convict offenders based on the results of these elements.", "Utilizing these elements can not only bring intermediate supervision information to the judgment prediction task but also make the prediction results of the model more interpretable.", "Towards a more in-depth analysis of element-based symbols, Shu et al. (2019) propose a dataset for extracting elements from three different kinds of cases: divorce disputes, labor disputes, and loan disputes.", "The dataset requires us to detect whether the related elements are satisfied or not, and formalizes the task as a multi-label classification problem.", "To show the performance of existing methods on element extraction, we have conducted experiments on the dataset, and the results can be found in Table 2.", "We have implemented several classical encoding models from NLP for element extraction, including TextCNN (Kim, 2014), DPCNN (Johnson and Zhang, 2017), LSTM (Hochreiter and Schmidhuber, 1997), BiDAF (Seo et al., 2016), and BERT (Devlin et al., 2019).", "We have tried two different versions of pretrained parameters for BERT: the original parameters (BERT) and parameters pretrained on Chinese legal documents (BERT-MS) (Zhong et al., 2019b).", "From the results, we can see that the language model pretrained on the general domain performs worse than the domain-specific PLM, which proves the necessity of domain-specific PLMs in LegalAI.", "For the following parts of our paper, we will use BERT pretrained on legal documents for better performance.", "From the results of element extraction, we can find that existing methods can reach promising performance on element extraction, but are still not sufficient for the corresponding applications.", "These elements can be regarded as pre-defined legal knowledge and help with downstream tasks.", "How to improve the performance of element extraction is valuable for further research.",
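The element-extraction task above (and, later, the first two C-LJP subtasks) is formalized as multi-label classification. A minimal sketch of such a setup in PyTorch, with the encoder (TextCNN, LSTM, BERT, ...) abstracted away as a pooled text vector; this is illustrative, not the surveyed systems' code, and the sizes are example values.

```python
import torch
import torch.nn as nn

class MultiLabelHead(nn.Module):
    def __init__(self, encoder_dim: int, num_labels: int):
        super().__init__()
        self.classifier = nn.Linear(encoder_dim, num_labels)

    def forward(self, pooled_text: torch.Tensor) -> torch.Tensor:
        return self.classifier(pooled_text)       # one logit per element/label

head = MultiLabelHead(encoder_dim=768, num_labels=20)
criterion = nn.BCEWithLogitsLoss()                # independent sigmoid per label
logits = head(torch.randn(4, 768))                # a batch of 4 pooled encodings
targets = torch.randint(0, 2, (4, 20)).float()    # multi-hot gold labels
loss = criterion(logits, targets)
preds = torch.sigmoid(logits) > 0.5               # threshold to decide each label
```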
"In this section, we will describe several typical applications in LegalAI, including Legal Judgment Prediction, Similar Case Matching, and Legal Question Answering.", "Legal Judgment Prediction and Similar Case Matching can be regarded as the core functions of judgment in the Civil Law and Common Law systems, while Legal Question Answering can provide consultancy for those who are unfamiliar with the legal domain.", "Therefore, exploring these three tasks covers most aspects of LegalAI.", "Legal Judgment Prediction (LJP) is one of the most critical tasks in LegalAI, especially in the Civil Law system.", "In the Civil Law system, the judgment results are decided according to the facts and the statutory articles.", "One will receive legal sanctions only after one has violated the prohibited acts prescribed by law.", "The task of LJP mainly concerns how to predict the judgment results from both the fact description of a case and the contents of the statutory articles in the Civil Law system.", "As a result, LJP is an essential and representative task in countries with a Civil Law system, like France, Germany, Japan, and China.", "Besides, LJP has drawn lots of attention from both artificial intelligence researchers and legal professionals.", "In the following parts, we describe the research progress and explore the future directions of LJP.", "LJP has a long history.", "Early works revolved around analyzing existing legal cases in specific circumstances using mathematical or statistical methods (Kort, 1957; Ulmer, 1963; Nagel, 1963; Keown, 1980; Segal, 1984; Lauderdale and Clark, 2012).", "The combination of mathematical methods and legal rules makes the predicted results interpretable.", "To promote the progress of LJP, Xiao et al. (2018) have proposed a large-scale Chinese criminal judgment prediction dataset, C-LJP.", "The dataset contains over 2.68 million legal documents published by the Chinese government, making C-LJP a qualified benchmark for LJP.", "C-LJP contains three subtasks: relevant articles, applicable charges, and the term of penalty.", "The first two can be formalized as multi-label classification tasks, while the last one is a regression task.", "Besides, English LJP datasets also exist (Chalkidis et al., 2019a), but their size is limited.", "With the development of neural networks, many researchers began to explore LJP using deep learning technology (Hu et al., 2018; Wang et al., 2019; Li et al., 2019b; Liu et al., 2019b; Li et al., 2019a; Kang et al., 2019).", "These works can be divided into two primary directions.", "The first one is to use more novel models to improve performance.", "Chen et al. (2019) use the gating mechanism to enhance the performance of predicting the term of penalty.", "Pan et al. (2019) propose multi-scale attention to handle cases with multiple defendants.", "Besides, other researchers explore how to utilize legal knowledge or the properties of LJP.", "Luo et al. (2017) use the attention mechanism between facts and law articles to help the prediction of applicable charges.", "Zhong et al. (2018) present a topological graph to utilize the relationships between the different tasks of LJP.",
"Besides, Hu et al. (2018) incorporate ten discriminative legal attributes to help predict low-frequency charges.", "To better understand recent advances in LJP, we have conducted a series of experiments on C-LJP.", "Firstly, we implement several classical text classification models, including TextCNN (Kim, 2014), DPCNN (Johnson and Zhang, 2017), LSTM (Hochreiter and Schmidhuber, 1997), and BERT (Devlin et al., 2019).", "For the parameters of BERT, we use the parameters pretrained on Chinese criminal cases (Zhong et al., 2019b).", "Secondly, we implement several models which are specially designed for LJP, including FactLaw (Luo et al., 2017), TopJudge (Zhong et al., 2018), and Gating Network (Chen et al., 2019).", "The results can be found in Table 4.", "Table 4: Experimental results of judgment prediction on C-LJP (MiF/MaF: micro/macro F1; Dis: the distance metric for the term of penalty).
| Model | Dev Charge MiF/MaF | Dev Article MiF/MaF | Dev Term Dis | Test Charge MiF/MaF | Test Article MiF/MaF | Test Term Dis |
| TextCNN | 93.8 / 74.6 | 92.8 / 70.5 | 1.586 | 93.9 / 72.2 | 93.5 / 67.0 | 1.539 |
| DPCNN | 94.7 / 72.2 | 93.9 / 68.8 | 1.448 | 94.9 / 72.1 | 94.6 / 69.4 | 1.390 |
| LSTM | 94.7 / 71.2 | 93.9 / 66.5 | 1.456 | 94.3 / 66.0 | 94.7 / 70.7 | 1.467 |
| BERT | 94.5 / 66.3 | 93.5 / 64.7 | 1.421 | 94.7 / 71.3 | 94.3 / 66.9 | 1.342 |
| FactLaw | 79.5 / 25.4 | 79.8 / 24.9 | 1.721 | 76.9 / 35.0 | 78.1 / 30.8 | 1.683 |
| TopJudge | 94.8 / 76.3 | 94.0 / 69.6 | 1.438 | 97.6 / 76.8 | 96.9 / 70.9 | 1.335 |
| Gating Network | - | - | 1.604 | - | - | 1.553 |", "From the results, we can learn that most models can reach promising performance in predicting high-frequency charges or articles.", "However, the models do not perform well on low-frequency labels, as there is a gap between micro-F1 and macro-F1.", "Hu et al. (2018) have explored few-shot learning for LJP.", "However, their model requires additional attribute information labelled manually, which is time-consuming and makes it hard to employ the model on other datasets.", "Besides, we can find that the performance of BERT is not satisfactory, as it does not improve much over those models with fewer parameters.", "The main reason is that legal texts are very long, but the maximum length that BERT can handle is 512.", "According to statistics, the maximum document length is 56,694, and 15% of the documents are longer than 512.", "Document understanding and reasoning techniques are required for LJP.", "Although embedding-based methods can achieve promising performance, we still need to consider combining symbol-based with embedding-based methods in LJP.", "Take TopJudge as an example: this model formalizes the topological order between the tasks in LJP (the symbol-based part) and uses TextCNN for encoding the fact description.", "By combining symbol-based and embedding-based methods, TopJudge has achieved promising results on LJP.", "Comparing the results of TextCNN and TopJudge, we can find that just integrating the order of judgments into the model can lead to improvements, which proves the necessity of combining embedding-based and symbol-based methods.", "For better LJP performance, some challenges require the future efforts of researchers: (1) Document understanding and reasoning techniques are required to obtain global information from extremely long legal texts.", "(2) Few-shot learning.", "Even low-frequency charges should not be ignored, as they are part of legal integrity.", "Therefore, handling infrequent labels is essential to LJP.", "(3) Interpretability.", "If we want to apply methods to real legal systems, we must understand how they make predictions.", "However, existing embedding-based methods work as a black box.",
"What factors affected their predictions remains unknown, and this may introduce unfairness and ethical issues, like gender bias, into legal systems.", "Introducing the legal symbols and knowledge mentioned before will benefit the interpretability of LJP.", "In countries with a Common Law system, like the United States, Canada, and India, judicial decisions are made according to similar and representative past cases.", "As a result, how to identify the most similar case is the primary concern in the judgment of the Common Law system.", "In order to better predict the judgment results in the Common Law system, Similar Case Matching (SCM) has become an essential topic of LegalAI.", "SCM concentrates on finding pairs of similar cases, and the definition of similarity can vary.", "SCM requires modeling the relationship between cases using information of different granularity, such as the fact level, the event level, and the element level.", "In other words, SCM is a particular form of semantic matching (Xiao et al., 2019), which can benefit legal information retrieval.", "Traditional methods of Information Retrieval (IR) focus on term-level similarities with statistical models, including TF-IDF (Salton and Buckley, 1988) and BM25 (Robertson and Walker, 1994), which are widely applied in current search systems.",
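As a concrete illustration of such term-level matching, here is a hedged sketch of a TF-IDF baseline for the CM triple task described below (given documents A, B, and C, decide whether B or C is more similar to A), assuming scikit-learn; a real run on CM would also need Chinese word segmentation before vectorization.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def more_similar(doc_a: str, doc_b: str, doc_c: str) -> str:
    # Fit TF-IDF on the triple and compare cosine similarities to A.
    tfidf = TfidfVectorizer().fit_transform([doc_a, doc_b, doc_c])
    sim_ab = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    sim_ac = cosine_similarity(tfidf[0], tfidf[2])[0, 0]
    return "B" if sim_ab >= sim_ac else "C"
```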
"In addition to these term matching methods, other researchers try to utilize meta-information (Medin, 2000; Gao et al., 2011; Wu et al., 2013) to capture semantic similarity.", "Many machine learning methods have also been applied to IR, like SVD (Xu et al., 2010) or factorization (Rendle, 2010; Kabbur et al., 2013).", "With the rapid development of deep learning technology and NLP, many researchers apply neural models, including multi-layer perceptrons (Huang et al., 2013), CNNs (Shen et al., 2014; Hu et al., 2014; Qiu and Huang, 2015), and RNNs (Palangi et al., 2016), to IR.", "There are several LegalIR datasets, including COLIEE (Kano et al., 2018), CaseLaw (Locke and Zuccon, 2018), and CM (Xiao et al., 2019).", "Both COLIEE and CaseLaw involve retrieving the most relevant articles from a large corpus, while data examples in CM give three legal documents for calculating similarity.", "These datasets provide benchmarks for the study of LegalIR.", "Many researchers focus on building an easy-to-use legal search engine (Barmakian, 2000; Turtle, 1995).", "They also explore utilizing more information, including citations (Monroy et al., 2013; Geist, 2009; Raghav et al., 2016) and legal concepts (Maxwell and Schafer, 2008; Van Opijnen and Santos, 2017).", "Towards the goal of calculating similarity at the semantic level, deep learning methods have also been applied to LegalIR.", "Tran et al. (2019) propose a CNN-based model with document- and sentence-level pooling that achieves state-of-the-art results on COLIEE, while other researchers explore employing better embedding methods for LegalIR (Landthaler et al., 2016; Sugathadasa et al., 2018).", "To get a better view of the current progress of LegalIR, we select CM (Xiao et al., 2019) for experiments.", "CM contains 8,964 triples, where each triple contains three legal documents (A, B, C).", "The task designed in CM is to determine whether B or C is more similar to A.", "We have implemented four different types of baselines: (1) term matching methods: TF-IDF (Salton and Buckley, 1988).", "(2) Siamese networks with two parameter-shared encoders, including TextCNN (Kim, 2014), BiDAF (Seo et al., 2016), and BERT (Devlin et al., 2019), and a distance function.", "(3) Semantic matching models at the sentence level, ABCNN (Yin et al., 2016), and at the document level, SMASH-RNN (Jiang et al., 2019).", "The results can be found in Table 5.", "Table 5: Experimental results of SCM.
| Model | Dev | Test |
| TF-IDF | 52.9 | 53.3 |
| TextCNN | 62.5 | 69.9 |
| BiDAF | 63.3 | 68.6 |
| BERT | 64.3 | 66.8 |
| ABCNN | 62.7 | 69.9 |
| SMASH RNN | 64.2 | 65.8 |", "From the results, we observe that the existing neural models, which are capable of capturing semantic information, outperform TF-IDF, but their performance is still not sufficient for SCM.", "As Xiao et al. (2019) state, the main reason is that legal professionals think that the elements in this dataset define the similarity of legal cases.", "Legal professionals will emphasize whether two cases have similar elements.", "Only considering term-level and semantic-level similarity is insufficient for the task.", "For the further study of SCM, there are two directions that need future effort: (1) Element-based representations.", "Researchers can focus more on the symbols of legal documents, as the similarity of legal cases is related to symbols like elements.", "(2) Knowledge incorporation.", "As semantic-level matching is insufficient for SCM, we need to consider incorporating legal knowledge into models to improve the performance and provide interpretability.", "Another typical application of LegalAI is Legal Question Answering (LQA), which aims at answering questions in the legal domain.", "One of the most important parts of legal professionals' work is to provide reliable and high-quality legal consulting services for non-professionals.", "However, due to the insufficient number of legal professionals, it is often challenging to ensure that non-professionals can get enough high-quality consulting services, and LQA is expected to address this issue.", "[Table 6 fragment: columns KD-Questions / CA-Questions / All, each with Single and All settings; row: Unskilled Humans 76 ...]", "In LQA, the form of questions varies, as some questions emphasize the explanation of legal concepts, while others concern the analysis of specific cases.", "Besides, questions can also be expressed very differently by professionals and non-professionals, especially when describing domain-specific terms.", "These problems bring considerable challenges to LQA, and we conduct experiments to better demonstrate the difficulties of LQA in the following parts.", "In LegalAI, there are many datasets of question answering.",
"Duan et al. (2019) propose CJRC, a legal reading comprehension dataset with the same format as SQuAD 2.0 (Rajpurkar et al., 2018), which includes span extraction, yes/no questions, and unanswerable questions.", "Besides, COLIEE (Kano et al., 2018) contains about 500 yes/no questions.", "Moreover, the bar exam is a professional qualification examination for lawyers, so bar exam datasets (Fawei et al., 2016; Zhong et al., 2019a) may be quite hard, as they require professional legal knowledge and skills.", "In addition to these datasets, researchers have also explored many methods for LQA.", "Rule-based systems (Buscaldi et al., 2010; Kim et al., 2013; Kim and Goebel, 2017) were prevalent in early research.", "In order to reach better performance, researchers utilize more information, like the explanation of concepts (Taniguchi and Kano, 2016; Fawei et al., 2015), or formalize relevant documents as graphs to help reasoning (Monroy et al., 2009, 2008; Tran et al., 2013).", "Machine learning and deep learning methods like CRF (Bach et al., 2017), SVM (Do et al., 2017), and CNN (Kim et al., 2015) have also been applied to LQA.", "However, most existing methods conduct experiments on small datasets, which makes them not necessarily applicable to massive datasets and real scenarios.", "We select JEC-QA (Zhong et al., 2019a) as the dataset for our experiments, as it is the largest dataset collected from the bar exam, which guarantees its difficulty.", "JEC-QA contains 28,641 multiple-choice and multiple-answer questions, together with 79,433 relevant articles to help answer the questions.", "JEC-QA classifies questions into knowledge-driven questions (KD-Questions) and case-analysis questions (CA-Questions), and reports the performance of humans.", "We implemented several representative question answering models, including BiDAF (Seo et al., 2016), BERT (Devlin et al., 2019), Co-matching (Wang et al., 2018), and HAF (Zhu et al., 2018).", "The experimental results can be found in Table 6.", "From the experimental results, we can learn that the models cannot answer legal questions well, compared with their promising results in open-domain question answering, and that there is still a huge gap between existing models and humans in LQA.", "For more qualified LQA methods, there are several significant difficulties to overcome: (1) Legal multi-hop reasoning.",
"As Zhong et al. (2019a) state, existing models can perform inference but not multi-hop reasoning.", "However, legal cases are very complicated and cannot be handled by single-step reasoning.", "(2) Legal concept understanding.", "We can find that almost all models are better at case analysis than knowledge understanding, which proves that knowledge modelling is still challenging for existing methods.", "How to model legal knowledge for LQA is essential, as legal knowledge is the foundation of LQA.", "In this paper, we describe the development status of various LegalAI tasks and discuss what we can do in the future.", "In addition to the applications and tasks we have mentioned, there are many other tasks in LegalAI, like legal text summarization and information extraction from legal contracts.", "Nevertheless, no matter what kind of application it is, we can apply embedding-based methods for better performance, together with symbol-based methods for more interpretability.", "Besides, the three main challenges of legal tasks remain to be solved.", "Knowledge modelling, legal reasoning, and interpretability are the foundations on which LegalAI can reliably serve the legal domain.", "Some existing methods are trying to solve these problems, but there is still a long way for researchers to go.", "In the future, for these existing tasks, researchers can focus on solving the three most pressing challenges of LegalAI by combining embedding-based and symbol-based methods.", "For tasks that do not yet have a dataset, or whose datasets are not large enough, we can try to build large-scale and high-quality datasets or use few-shot or zero-shot methods to solve these problems.", "Furthermore, we need to take the ethical issues of LegalAI seriously.", "Applying the technology of LegalAI directly to the legal system will bring ethical issues like gender bias and racial discrimination.", "The results given by these methods cannot convince people.", "To address this issue, we must note that the goal of LegalAI is not to replace legal professionals but to help their work.", "As a result, we should regard the results of the models only as a reference.", "Otherwise, the legal system will no longer be reliable.", "For example, professionals can spend more time on complex cases and leave the simple cases to the model.", "However, for safety, these simple cases must still be reviewed.", "In general, LegalAI should play a supporting role to help the legal system.", "This work is supported by the National Key Research and Development Program of China (No. 2018YFC0831900) and the National Natural Science Foundation of China (NSFC No. 61772302, 61532010).", "Besides, the dataset of element extraction is provided by Gridsum." ]
[ "abstain", "abstain", "abstain", "abstain", "result", "objective", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "other" ]
[ "In real-world scenarios, a text classification task often begins with a cold start , when labeled data is scarce.", "In such cases, the common practice of fine-tuning pre-trained models, such as BERT, for a target classification task, is prone to produce poor performance.", "We suggest a method to boost the performance of such models by adding an intermediate unsupervised classification task, between the pretraining and fine-tuning phases.", "As such an intermediate task, we perform clustering and train the pre-trained model on predicting the cluster labels.", "We test this hypothesis on various data sets, and show that this additional classification phase can significantly improve performance, mainly for topical classification tasks, when the number of labeled instances available for fine-tuning is only a couple of dozen to a few hundred.", "The standard paradigm for text classification relies on supervised learning, where it is well known that the size and quality of the labeled data strongly im-pact the performance (Raffel et al., 2019).", "Hence, developing a text classifier in practice typically requires making the most of a relatively small set of annotated examples.", "The emergence of transformer-based pre-trained language models such as BERT (Devlin et al., 2018) has reshaped the NLP landscape, leading to significant advances in the performance of most NLP tasks, text classification included (e.g., Nogueira and Cho, 2019; Ein-Dor et al., 2020).", "These models typically rely on pretraining with massive and heterogeneous corpora on a general Masked Language Modeling ( MLM ) task, i.e., predicting a word that is masked in the original text.", "Later on, the obtained model is fine-tuned to the actual task of interest, termed here the target task , These authors contributed equally to this work.", "using the labeled data available for this task.", "Thus, pretrained models serve as general sentence encoders which can be adapted to a variety of target tasks (Lacroix et al., 2019; Wang et al., 2020a).", "Our work focuses on a challenging yet common scenario, where unlabeled data is available but labeled data is scarce.", "In many real-world scenarios, obtaining even a couple of hundred of labeled examples per class is challenging.", "Commonly, a target class has a relatively low prior in the examined data, making it a formidable goal to collect enough positive examples for it (Japkowicz and Stephen, 2002).", "Moreover, sometimes data cannot be labeled via crowd-annotation platforms due to its confidentiality (be it for data privacy reasons or for protecting intellectual property) or since the labeling task requires special expertise.", "On top of this, often the number of categories to be considered is relatively large, e.g., 50 , thus making even a modest demand of 200 labeled examples per class a task of labeling 10K instances, which is inapplicable in many practical cases (for an extreme example, cf. 
Partalas et al., 2015).", "In such limited real-world settings, fine-tuning a large pretrained model often yields far from optimal performance.", "To overcome this, one may take a gradual approach composed of various phases.", "One possibility is to further pretrain the model with the self-supervised MLM task over unlabeled data taken from the target task domain (Whang et al., 2019).", "Alternatively, one can train the pretrained model using a supervised intermediate task which is different in nature from the target task, and for which labeled data is more readily available (Pruksachatkun et al., 2020; Wang et al., 2019a; Phang et al., 2018).", "Each of these steps is expected to provide a better starting point for the final fine-tuning phase, performed over the scarce labeled data available for the target task, aiming to end up with improved performance.", "Here, we suggest a strategy that exploits unsupervised text clustering as the intermediate task towards fine-tuning a pretrained model for text classification.", "Our work is inspired by the use of clustering to obtain labels in computer vision (Gidaris et al., 2018; Kolesnikov et al., 2019).", "Specifically, we use an efficient clustering technique that relies on simple Bag Of Words (BOW) representations to partition the unlabeled training data into relatively homogeneous clusters of text instances.", "Next, we treat these clusters as labeled data for an intermediate text classification task, and train the pre-trained model, with or without additional MLM pretraining, with respect to this multi-class problem, prior to the final fine-tuning over the actual target-task labels.", "Extensive experimental results demonstrate the practical value of this strategy on a variety of benchmark data.", "We further analyze the results to gain insights as to why and when this approach would be most valuable, and conclude that it is most prominently so when the training data available for the target task is relatively small and the classification task is of a topical nature.", "Finally, we propose future directions.", "We release code for reproducing our method at https://github.com/IBM/intermediate-training-using-clustering.", "2 Intermediate Training using Unsupervised Clustering.", "A pre-trained model is typically developed in consecutive phases.", "Henceforth, we will refer to BERT as the canonical example of such models.", "First, the model is pretrained over massive general corpora with the MLM task.", "(BERT was originally also pretrained on next sentence prediction; however, later works (Yang et al., 2019; Liu et al., 2019b) have questioned the contribution of this additional task and focused on MLM.)", "We denote the obtained model simply as BERT.", "Second, BERT is fine-tuned in a supervised manner with the available labeled examples for the target task at hand.", "This standard flow is represented via Path-1 in Fig. 1.", "An additional phase can be added between these two, referred to next as intermediate training, or inter-training in short.", "In this phase, the model is exposed to the corpus of the target task, or a corpus of the same domain, but still has no access to labeled examples for this task.", "A common example of such an intermediate phase is to continue to intertrain BERT using the self-supervised MLM task over the corpus or the domain of interest, sometimes referred to as further", "or adaptive pre-training (e.g., Gururangan et al., 2020).",
1, and the resulting model is denoted BERT IT:MLM , standing for Intermediate Task: MLM.", "A key contribution of this paper is to propose a new type of intermediate task, which is designed to be aligned with a text classification target task, and is straightforward to use in practice.", "The underlying intuition is that inter-training the model over a related text classification task would be more beneficial than MLM inter-training, which focuses on different textual entities, namely predicting the identity of a single token.", "Specifically, we suggest unsupervised clustering for generating pseudo-labels for inter-training.", "As the clustering partition presumably captures information about salient features in the corpus, feeding this information into the model could lead to representations that are better geared to perform the target task.", "These pseudo-labels can be viewed as weak labels, but importantly they are not tailored to, nor do they require, a specific design per target task.", "Instead, we suggest generating pseudo-labels in a way independent of the target classification task.", "The respective flow is represented via Path-3 in Fig. 1.", "In this flow, we first apply clustering to partition the training data into $n_c$ clusters.", "Next, we use the obtained partition as 'labeled' data in a text classification task, where the classes are defined via the $n_c$ clusters, and inter-train BERT to predict the cluster label.", "In line with MLM, inter-training includes a classifier layer on top of BERT, which is discarded before the fine-tuning stage.", "The resulting inter-trained model is denoted BERT IT:CLUST .", "Finally, Path-4 in Fig. 1 represents a sequential composition of Paths 2 and 3.", "In this flow, we first inter-train BERT with the MLM task.", "Next, the obtained model is further inter-trained to predict the $n_c$ clusters, as in Path-3.", "The model resulting from this hybrid approach is denoted BERT IT:MLM+CLUST .", "Importantly, following Path-3 or Path-4 requires no additional labeled data, and involves an a priori clustering of training instances that naturally gives rise to an alternative or an additional inter-training task.", "As we show in the following sections, despite its simplicity, this strategy provides a significant boost in performance, especially when labeled data for the final fine-tuning is in short supply.", "We evaluate over 6 topical datasets and 3 non-topical ones (see Table 1), which cover a variety of classification tasks and domains: Yahoo! Answers (Zhang et al., 2015), which separates answers and questions into types; DBpedia (Zhang et al., 2015, CC-BY-SA), which differentiates entity types by their Wikipedia articles; AG's News (Zhang et al., 2015), which categorizes news articles; CFPB, which classifies consumer complaints; 20 newsgroups (Lang, 1995), which classifies 20 Usenet discussion groups; ISEAR (Shao et al., 2015, CC BY-NC-SA 3.0), which considers personal reports for emotion; SMS spam (Almeida et al., 2011), which identifies spam messages; Polarity (Pang and Lee, 2005), which includes sentiment analysis on movie reviews; and Subjectivity (Pang and Lee, 2004), which categorizes movie snippets as subjective or objective.", "A topical dataset splits sentences by a high-level distinction related to what the sentence is about (e.g., sports vs. 
economics).", "Non-topical datasets look for finer stylistic distinctions that may depend on the way the sentence is written or on fine details rather than on the central meaning it discusses.", "Such a dataset may also separate almost identical sentences; for example, the word \"no\" could distinguish between sentences with negative and positive sentiment.", "When no split is provided we apply a 70% / 10% / 20% train-dev-test split, respectively.", "To reduce the computational cost over the larger datasets (DBpedia, AG's News, Yahoo! Answers and CFPB), we trim the train/test sets of these datasets to 15K/3K instances respectively, by randomly sampling from each set.", "All runs and all methods use only the trimmed versions.", "In our main set of experiments, we compare the performance of fine-tuning BERT-based models over a target task, for different settings of intermediate training.", "We consider four BERT-based settings, as described in Section 2 and in Figure 1.", "Two baselines:", "(i) BERT, without intermediate training, and", "(ii) BERT IT:MLM , inter-trained on MLM; and two settings that rely on clustering:", "(i) BERT IT:CLUST , where predicting cluster labels is used for inter-training, and", "(ii) BERT IT:MLM+CLUST , which combines the two intermediate tasks.", "Training samples: For each setting, the final fine-tuning for the target task is performed, per dataset, for training budgets varying between 64 and 1024 labeled examples.", "For each data size x, the experiment is repeated 5 times, with each repetition representing a different sampling of x labeled examples from the train set.", "The samplings of training examples are shared between all settings.", "That is, for a given dataset and train size, the final training for all settings is done with respect to the same 5 samples of labeled examples.", "Inter-training: Intermediate training, when done, was performed over the unlabeled train set for each dataset (ignoring instances' labels).", "We studied two implementations for the clustering task: K-means (Lloyd, 1982) and sequential Information Bottleneck (sIB), which is known to obtain better results in practice (Slonim et al., 2002) and in theory (Slonim et al., 2013).", "Based on initial experiments, and previous insights from works in the computer vision domain (Yan et al., 2020), we opted for a relatively large number of clusters, and rather than optimizing the number of clusters per dataset, set it to 50 for all cases.", "K-means was run over GloVe (Pennington et al., 2014) representations following word stemming.", "We used a publicly available implementation of sIB (https://github.com/IBM/sib) with its default configuration (i.e., 10 restarts and a maximum of 15 iterations for every single run).", "For sIB clustering, we used Bag of Words (BOW) representations on stemmed text with the default vocabulary size (which is defined as the 10K most frequent words in the dataset).", "Our results indicate that inter-training with respect to sIB clusters consistently led to better results in the final performance on the target task, compared to inter-training with respect to the clusters obtained with K-means (see Section 5.1 for details).", "We also considered inter-training only on representative examples of the clustering results, filtering out a given amount of outlier examples, but obtained no significant gain (data not shown).", "Note that the run time of the clustering algorithms is only a few seconds.", "The run time of the fine-tuning step of the inter-training task takes five and a half minutes for the largest train set (15K 
instances) on a Tesla V100-PCIE-16GB GPU.", "Setting the number of clusters to be equal to the number of classes resulted in inferior accuracy.", "In addition, one may not know how many classes truly exist in the data, so this parameter is not necessarily known in real-world applications.", "Table 1 (dataset details; train size / test size / number of classes): Yahoo! Answers 15K / 3K / 10; DBpedia 15K / 3K / 14; CFPB 15K / 3K / 15; 20 newsgroups 10.2K / 7.5K / 20; AG's News 15K / 3K / 4; ISEAR 5.4K / 1.5K / 7; SMS spam 3.9K / 1.1K / 2; Subjectivity 7K / 2K / 2; Polarity 7.5K / 2.1K / 2.", "BERT hyper-parameters: The starting point of all settings is the BERT-base model (110M parameters).", "BERT inter-training and fine-tuning runs were all performed using the Adam optimizer (Kingma and Ba, 2015) with a standard setting consisting of a learning rate of 3×10^-5, batch size 64, and maximal sequence length 128.", "In a practical setting with a limited annotation budget one cannot assume that a labeled dev set is available; thus in all settings we did not use the dev set, and fine-tuning was arbitrarily set to be over 10 epochs, always selecting the last epoch.", "For inter-training over the clustering results we used a single epoch, for two reasons.", "First, loosely speaking, additional training over the clusters may drift the model too far towards learning the partition into clusters, which is an auxiliary task in our context, and not the real target task.", "Second, from the perspective of a practitioner, single-epoch training is preferred since it is the least demanding in terms of run time.", "For BERT IT:MLM we used 30 epochs with a replication rate of 5, and followed the masking strategy from Devlin et al. (2018).", "Computational budget: Overall we report the results of 1440 BERT fine-tuning runs (4 experimental settings × 9 datasets × 8 labeling budgets × 5 repetitions).", "In addition, we performed 288 inter-training epochs over the full datasets (9 datasets × (30 BERT IT:MLM epochs + 1 BERT IT:CLUST epoch + 1 BERT IT:MLM+CLUST epoch)).", "In total, this would equate to about 60 hours on a single Tesla V100-PCIE-16GB GPU.", "Table 2 depicts the results over all datasets, focusing on the practical use case of a budget of 64", "samples for fine-tuning (128 for 20 newsgroups, see explanation in Fig. 2).", "(In preliminary experiments we found the above masking configuration to be the best for this baseline.)", "As shown in the table, the performance gains of BERT IT:CLUST are mainly reflected in the 6 topical datasets.", "For these datasets, BERT IT:CLUST confers a significant benefit in accuracy (on average, a 110% accuracy gain and a 33% error reduction).", "Figure 2 depicts the classification accuracy for the different settings for varying labeling budgets, using sIB for clustering-based inter-training.", "Over the topical datasets, BERT IT:CLUST and BERT IT:MLM+CLUST clearly outperform BERT and BERT IT:MLM in the small labeled data regime, where the gain is most prominent for the smallest amount of labeled data examined (when only 64 labeled examples are available) and gradually diminishes as more labeled samples are added.", "Table 2 (64-sample budget; BERT accuracy / BERT IT:CLUST accuracy / gain / error reduction): Yahoo! Answers 21.2 / 45.9 / 117% / 31%; DBpedia 31.2 / 67.0 / 115% / 52%; CFPB 15.0 / 27.5 / 83% / 15%; 20 newsgroup 13.0 / 47.2 / 263% / 39%; AG's News 61.9 / 80.7 / 30% / 49%; ISEAR 19.0 / 29.0 / 53% / 12%; average gain 110%, average error reduction 33%.", "We performed paired t-tests to compare BERT IT:CLUST with BERT and BERT IT:MLM , pooling together all datasets and repetitions for a given 
labeling budget.", "Table 3 (paired t-test p-values, after Bonferroni correction, of classification accuracy for BERT IT:CLUST compared to BERT and to BERT IT:MLM; insignificant results, p > 0.05, are denoted by '-'; train sizes 64 / 128 / 192 / 256 / 384 / 512 / >512): vs. BERT 1×10^-6 / 1×10^-6 / 6×10^-7 / 2×10^-5 / 2×10^-3 / 9×10^-3 / -; vs. BERT IT:MLM 8×10^-5 / 3×10^-3 / 4×10^-2 / - / - / - / -.", "As can be seen in Tab. 3,", "the performance gain, over all datasets, of BERT IT:CLUST over BERT is statistically significant for a budget of up to 512.", "BERT IT:CLUST is not as successful in the 3 non-topical datasets (cf. Tab. 2 and Fig. 2).", "A possible reason for the lack of success of inter-training in these three datasets is that their classification task is different in nature from the tasks in the other six datasets.", "Identifying spam messages, determining whether a text is subjective or objective, or analyzing the sentiment (polarity) of texts can be based on stylistic distinctions that may depend on the way the sentence is written rather than on the central topic it discusses.", "Inter-training over BOW clustering seems to be less beneficial when such considerations are needed.", "We further analyze this in Section 5.4.", "Nevertheless, it is safe to apply BERT IT:CLUST even in these datasets, as results are typically comparable to the baseline algorithms, neither better nor worse.", "Both BERT IT:MLM and BERT IT:CLUST expose the model to the target corpus.", "The performance gains of BERT IT:CLUST over BERT IT:MLM suggest that inter-training on top of the clustering carries an additional benefit.", "In addition, these inter-training approaches are complementary: as seen in Fig. 2, BERT IT:MLM+CLUST outperforms both BERT IT:CLUST and BERT IT:MLM (at the cost of some added runtime).", "Taken together, our results suggest that in topical datasets, where labeled data is scarce, the pseudo-labels generated via clustering can be leveraged to provide a better starting point for a pre-trained model towards its fine-tuning for the target task.", "In the literature (Slonim et al., 2002) and in our initial trials, sIB showed better clustering performance, and therefore was chosen over other clustering methods.", "Next, we analyze whether sIB is also the best fit for inter-training.", "We compare (see App. C) sIB over BOW representations to two other clustering configurations: K-means over GloVe representations and Hartigan's K-means (Slonim et al., 2013) over GloVe.", "For most datasets, inter-training over the results of sIB over BOW representations achieved the best results.", "Our inter-training method relies on BOW-based clustering.", "Since knowledge of the input words is potentially quite powerful for some text classification tasks, we examine the performance of several BOW-based methods.", "We used the same training samples to train multinomial Naive Bayes (NB) and Support Vector Machine (SVM) classifiers, using either Bag of Words (BOW) or GloVe (Pennington et al., 2014) representations.", "For GloVe, a text is represented as the average GloVe embeddings of its tokens.", "This yielded four reference settings: NB-BOW, NB-GloVe, SVM-BOW and SVM-GloVe.", "Overall, all four methods were inferior to BERT IT:CLUST , as shown in App. B.", 
"Thus, the success of our method cannot simply be attributed to the information in the BOW representations.", "The embeddings after BERT IT:CLUST show potential as a better starting point for fine-tuning.", "Figure 3 depicts t-SNE (van der Maaten and Hinton, 2008) 2D visualizations of the output embeddings over the full train set of several datasets, comparing the [CLS] embeddings before and after inter-training.", "Manifestly, for topical datasets, the BERT IT:CLUST embeddings, obtained after inter-training with respect to sIB clusters, induce a much clearer separation between the target classes, even though no labeled data was used to obtain this model.", "Moreover, and perhaps not surprisingly, the apparent visual separation resulting from inter-training is aligned with the performance gain obtained later on in the fine-tuning phase (Figure 3: t-SNE visualizations of model embeddings over the train set, using BERT (top) vs. BERT IT:CLUST (bottom))", "over the target task (as seen, for instance, in the visualizations of Polarity versus DBpedia data).", "In addition to the qualitative results of the visualization, we pursue a more quantitative path.", "We assess whether examples of the same class are more closely represented after inter-training.", "Formally, given a set of instances' embeddings $e_1, \ldots, e_n$ and their corresponding class labels $l_1, \ldots, l_n \in L$, we compute for each class $l \in L$ a centroid $c_l$, which is the average embedding of this class.", "We then compute the average Euclidean Embeddings' Distance ( ED ) from the corresponding centroids: $\mathrm{ED}(l, e) = \mathbb{E}_{i=1}^{n} \lVert e_i - c_{l_i} \rVert_2$. As a sanity check, we apply a significance test to the ED statistic, confirming that representations of same-class examples are close to each other.", "Specifically, we apply a permutation test (Fisher, 1971), with 1000 repetitions, comparing the class labels to random labels.", "We find that EDs for both BERT and BERT IT:CLUST are significantly different from random (p < 0.001).", 
"This implies that both before and after inter-training, same-class representations are close.", "Next, we compare the representations before and after inter-training.", "We find that the randomly permuted EDs of BERT IT:CLUST are about 3 times larger than BERT's, despite similar norm values.", "This means that the post-inter-training representations are more dispersed.", "Hence, to properly compare, we normalize ED by the average of the randomly permuted EDs (macro-average results were similar; we hence report only micro-average results):", "$\mathrm{NED}(l, e) = \mathrm{ED}(l, e) \, / \, \mathbb{E}_{\sigma \in S_n} \mathrm{ED}(\sigma(l), e)$, where $\sigma$ is a permutation out of $S_n$, the set of all permutations.", "Comparing the Normalized Embeddings' Distance ( NED ) before and after inter-training, we find that in all datasets the normalized distance is smaller after inter-training.", "In other words, BERT IT:CLUST brings same-class representations closer in comparison to BERT.", "A natural explanation for the contribution of inter-training to BERT's performance is that the pseudo-labels, obtained via the clustering partition, are informative with regard to the target task labels.", "To quantify this intuition, in Figure 4 we depict the Normalized Mutual Information (NMI) between sIB labels and the target task labels, calculated over the entire training set, versus the gain of using BERT IT:CLUST , reflected as the reduction in classification error rate between BERT and BERT IT:CLUST , at the extreme case of 64 fine-tuning samples.", "Evidently, in datasets where the NMI is around zero, BERT IT:CLUST does not confer a clear benefit; conversely, where the NMI is relatively high, the performance gains are pronounced as well.", "Notably, the three datasets with the lowest NMI are those for which inter-training was not beneficial, as discussed in Section 4.", "Since the partition obtained via clustering is often informative for the target class labels, we examine whether it can be utilized directly, as opposed to as pseudo-labels for BERT inter-training.", "To that end, we applied a simple heuristic.", "Given a labeling budget x, we divide it across clusters, relative to their size, while ensuring that at least one instance within each of the 50 clusters is labeled.", "We use the budget per cluster to reveal the labels of a random sample of examples in that cluster, and identify each cluster with its most dominant label.", "Next, given a new test example, we assign it the label associated with its nearest cluster.", "Results (see App. 
B) showed that this rudimentary classifier is generally not on par with BERT IT:CLUST , yet it can be surprisingly effective where the NMI is high and the labeling budget is small.", "In our work, we transfer a pretrained model to a new domain with little data.", "Transfer learning studies how to transfer models across domains.", "It suggests methods such as pivoting (Ziser and Reichart, 2018), weak supervision (Shnarch et al., 2018), data augmentation (Anaby-Tavor et al., 2020) and adversarial transfer (Cao et al., 2018).", "In Computer Vision, pretrained models are often learnt by image clustering (Caron et al., 2018).", "In NLP, however, clustering has mainly been used for non-transfer scenarios.", "Ball (2019) relies on pretrained embeddings to cluster labeled and unlabeled data.", "Then, they fill the missing labels to augment the training data.", "Clustering itself was improved by combining small amounts of data (Torres and Vaca, 2019; Wang et al., 2016).", "Pretrained models improved the state of the art in many downstream tasks (Nogueira and Cho, 2019; Ein-Dor et al., 2020) and they are especially needed and useful in low-resource and limited labeled data settings (Lacroix et al., 2019; Wang et al., 2020a; Chau et al., 2020).", "There are many suggestions to improve such models, including larger models (Raffel et al., 2019), changes in the pretraining tasks and architecture (Yang et al., 2019), augmenting pretraining (Geva et al., 2020), or improving the transfer itself (Valipour et al., 2019; Wang et al., 2019b; Sun et al., 2019; Xu et al., 2020).", "Two findings on pretraining support our hypothesis on the intermediate task, namely that classification surpasses MLM.", "Some pretraining tasks are better than others (Lan et al., 2020; Raffel et al., 2019) and supervised classification as additional pre-training improves performance (Lv et al., 2020; Wang et al., 2019a; Pruksachatkun et al., 2020).", "All these works aim to improve the performance upon transfer, making the model more suitable for any new domain.", "In contrast, we focus on improvement given the domain.", "With a transferred model, one can further improve performance with domain-specific information.", "Examples include utilizing metadata (Melamud et al., 2019), training on weakly-supervised data (Raisi and Huang, 2018; Meng et al., 2020) or multitasking on related tasks concurrently (Liu et al., 2019a).", "Given no domain-specific information, it was suggested to further pretrain on unlabeled data from the domain (Whang et al., 2019; Xu et al., 2019; Sung et al., 2019; Rietzler et al., 2020; Lee et al., 2020; Gururangan et al., 2020).", "This, however, is sometimes unhelpful or even hurts results (Pan, 2019).", "Transferring a model and retraining with a paucity of labels is often termed few-shot learning.", "Few-shot learning is used for many language-related tasks such as named entity recognition (Wang et al., 2020b), relation classification (Hui et al., 2020), and parsing (Schuster et al., 2019).", "There have also been suggestions other than fine-tuning the model.", "Koch (2015) suggests ranking examples' similarity with Siamese networks.", "Vinyals et al. (2016) rely on memory and attention to find neighboring examples and Snell et al. 
(2017) search for prototypes to compare to.", "Ravi and Larochelle (2017) do not define in advance how to compare the examples.", "Instead, they meta-learn how to train the few-shot learner.", "These works addressed the image classification domain, but they supply general methods which are used, improved and adapted in language domains (Geng et al., 2019; Yu et al., 2018).", "In conclusion, separate successful practices foreshadow our findings: clustering drives pretraining on images; supervised classification aids pre-training; and training on unlabeled domain examples is helpful with MLM.", "We presented a simple approach for improving pretrained models for text classification.", "Specifically, we show that inter-training BERT over pseudo-labels generated via unsupervised clustering creates a better starting point for the final fine-tuning over the target task.", "Our analyses suggest that BERT can leverage these pseudo-labels, namely that there exists a beneficial interplay between the proposed inter-training and the later fine-tuning stage.", "Our results show that this approach yields a significant boost in accuracy, mainly over topical data and when labeled data is scarce.", "Note that the method does require the existence of an unlabeled corpus, on the order of several thousand examples.", "We opted here for a practically oriented approach, which we do not claim to be optimal.", "Rather, the success of this approach suggests various directions for future work.", "In particular, several theoretical questions arise, such as what else determines the success of the approach in a given dataset; understanding the potential synergistic effect of BOW-based clustering for inter-training; whether more suitable partitions could be acquired by exploiting additional embedding spaces and/or more clustering techniques; co-training (Blum and Mitchell, 1998) methods; and more.", "On the practical side, while in this work we fixed the inter-training to be over 50 clusters and for a single epoch, future work can improve performance by tuning such hyper-parameters.", "In addition, one may consider using the labeled data available for fine-tuning as anchors for the intermediate clustering step, which we have not explored here.", "Another point to consider is the nature of the inter-training task.", "Here, we examined a multi-class setup where BERT is trained to predict one out of $n_c$ cluster labels.", "Alternatively, one may consider a binary inter-training task, where BERT is trained to determine whether two samples are drawn from the same cluster or not.", "In principle, inter-training BERT over clustering results may be valuable for additional downstream target tasks that are similar in spirit to standard text classification.", "Examples include Key-Point Analysis (Bar-Haim et al., 2020) and Textual Entailment (Dagan et al., 2013).", "The potential value of our approach in such cases is left for future work.", "Any use of a language model for classification involves some risk of bias, which stems from the pre-training and training data used to construct the model.", "Here we aim to improve the language model representations by relying on clustering of data from the target domain.", "We have no reason to believe this process would introduce bias beyond the potential bias that can occur whenever fine-tuning a model, but this is a potential risk, as we did not verify this directly.", "We thank Assaf Toledo for providing helpful advice on the clustering implementations." ]
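The Path-3 recipe laid out in the excerpt above (cluster the unlabeled train set into 50 clusters, inter-train BERT for a single epoch to predict the cluster pseudo-labels, then fine-tune on the scarce target labels) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' released code: scikit-learn's KMeans over bag-of-words counts stands in for the paper's preferred sIB-over-BOW clustering, Hugging Face transformers supplies the encoder, and all helper names are ours.

```python
# Sketch of Path-3 inter-training: BOW clustering -> pseudo-label
# classification. KMeans stands in for sIB; names are illustrative.
import torch
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def cluster_pseudo_labels(texts, n_clusters=50):
    # BOW counts over the 10K most frequent words (the paper also stems).
    bow = CountVectorizer(max_features=10_000).fit_transform(texts)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(bow)

class ClusterDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=128)
        self.labels = [int(l) for l in labels]
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def inter_train(unlabeled_texts, model_name="bert-base-uncased"):
    labels = cluster_pseudo_labels(unlabeled_texts)
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=50)  # head is discarded before fine-tuning
    args = TrainingArguments(output_dir="bert_it_clust",
                             num_train_epochs=1,  # single inter-training epoch
                             per_device_train_batch_size=64,
                             learning_rate=3e-5)
    Trainer(model=model, args=args,
            train_dataset=ClusterDataset(unlabeled_texts, labels, tok)).train()
    return model
```

After this single inter-training epoch, the 50-way classifier head is dropped and the returned encoder is fine-tuned on the real target-task labels, exactly as Path-3 prescribes.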
[ "abstain", "abstain", "method", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "result", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "result", "result", "objective", "result", "abstain", "abstain", "abstain", "abstain", "result", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other" ]
[ "We investigate the problem of Chinese Grammatical Error Correction (CGEC) and present a new framework named Tail-to-Tail ( TtT ) non-autoregressive sequence prediction to address the deep issues hidden in CGEC.", "Considering that most tokens are correct and can be conveyed directly from source to target, and the error positions can be estimated and corrected based on the bidirectional context information, thus we employ a BERT-initialized Transformer Encoder as the backbone model to conduct information modeling and conveying.", "Considering that only relying on the same position substitution cannot handle the variable-length correction cases, various operations such substitution, deletion, insertion, and local paraphrasing are required jointly.", "Therefore, a Conditional Random Fields (CRF) layer is stacked on the up tail to conduct non-autoregressive sequence prediction by modeling the token dependencies.", "Since most tokens are correct and easily to be predicted/conveyed to the target, then the models may suffer from a severe class imbalance issue.", "To alleviate this problem, focal loss penalty strategies are integrated into the loss functions.", "Moreover, besides the typical fix-length error correction datasets, we also construct a variable-length corpus to conduct experiments.", "Experimental results on standard datasets, especially on the variable-length datasets, demonstrate the effectiveness of TtT in terms of sentence-level Accuracy, Precision, Recall, and F1-Measure on tasks of error Detection and Correction 1 .", "Grammatical Error Correction (GEC) aims to automatically detect and correct the grammatical errors that can be found in a sentence (Wang et al., 2020c).", "It is a crucial and essential application task 1 Code: https://github.com/lipiji/TtT (cid:7625)(cid:10280)(cid:12560)(cid:12318)(cid:8495)(cid:26589)(cid:25726)(cid:27095)(cid:8307)(cid:28424) I feel fly long happy today!", "in many natural language processing scenarios such as writing assistant (Ghufron and Rosyida, 2018; Napoles et al., 2017; Omelianchuk et al., 2020), search engine (Martins and Silva, 2004; Gao et al., 2010; Duan and Hsu, 2011), speech recognition systems (Karat et al., 1999; Wang et al., 2020a; Kubis et al., 2020), etc.", "Grammatical errors may appear in all languages (Dale et al., 2012; Xing et al., 2013; Ng et al., 2014; Rozovskaya et al., 2015; Bryant et al., 2019), in this paper, we only focus to tackle the problem of Chinese Grammatical Error Correction (CGEC) (Chang, 1995).", "We investigate the problem of CGEC and the related corpora from SIGHAN (Tseng et al., 2015) and NLPCC (Zhao et al., 2018) carefully, and we conclude that the grammatical error types as well as the corresponding correction operations can be categorised into three folds, as shown in Figure 1: (1) Substitution .", "In reality, Pinyin is the most popular input method used for Chinese writings.", "Thus, the homophonous character confusion (For example, in the case of Type I, the pronunciation of the wrong and correct words are both FeiChang) is the fundamental reason which causes grammatical errors (or spelling errors) and can be corrected by substitution operations without changing the whole sequence structure (e.g., length).", "Thus, substitution is a fixed-length (FixLen) operation .", "(2) Deletion (cid:7625)(cid:10280)(cid:12560)(cid:12318)(cid:8495)(cid:26589)(cid:25726)(cid:27095)(cid:8307) (cid:28424) I feel fly long happy today!", "and Insertion .", "These two operations are used to handle the cases of 
word redundancies and omissions respectively.", "(3) Local paraphrasing .", "Sometimes, light operations such as substitution, deletion, and insertion cannot correct the errors directly; therefore, a slight paraphrasing of a subsequence is required to reorder some words of the sentence, as shown in Type III of Figure 1.", "Deletion, insertion, and local paraphrasing can be regarded as variable-length (VarLen) operations because they may change the sentence length.", "However, over the past few years, although a number of methods have been developed to deal with the problem of CGEC, some crucial and essential aspects remain unaddressed.", "Generally, sequence translation and sequence tagging are the two most typical technical paradigms to tackle the problem of CGEC.", "Benefiting from the development of neural machine translation (Bahdanau et al., 2015; Vaswani et al., 2017), attention-based seq2seq encoder-decoder frameworks have been introduced to address the CGEC problem in a sequence translation manner (Wang et al., 2018; Ge et al., 2018; Wang et al., 2019, 2020b; Kaneko et al., 2020).", "Seq2seq-based translation models are easy to train and can handle all the types of correcting operations mentioned above.", "However, considering the exposure bias issue (Ranzato et al., 2016; Zhang et al., 2019), the generated results usually suffer from the phenomenon of hallucination (Nie et al., 2019; Maynez et al., 2020) and may not be faithful to the source text, even though copy mechanisms (Gu et al., 2016) are incorporated (Wang et al., 2019).", "Therefore, Omelianchuk et al. (2020) and Liang et al. (2020) propose to employ pure tagging to address the problem of GEC instead of generation.", "All correcting operations such as deletion, insertion, and substitution can be guided by the predicted tags.", "Nevertheless, the pure tagging strategy requires extending the vocabulary V to about three times its size by adding insertion- and substitution-prefixes to the original tokens (e.g., insertion-good, substitution-paper), which decreases the computing efficiency dramatically.", "Moreover, the pure tagging framework needs to conduct multi-pass prediction until no more operations are predicted, which is inefficient and less elegant.", "Recently, many researchers fine-tune pre-trained language models such as BERT on the task of CGEC and obtain reasonable results (Zhao et al., 2019; Hong et al., 2019; Zhang et al., 2020b).", "However, limited by the BERT framework, most of them can only address the fixed-length correcting scenarios and cannot conduct deletion, insertion, and local paraphrasing operations flexibly.", "Moreover, during our investigations, we also observe an obvious but crucial phenomenon for CGEC: most words in a sentence are correct and need not be changed.", "This phenomenon is depicted in Figure 2, where the operation flow is from the bottom tail to the up tail.", "Grey dashed lines represent the Keep operations and the red solid lines indicate the three types of correcting operations mentioned above.", "On one side, intuitively, the target CGEC model should be able to move the correct tokens directly from the bottom tail to the up tail; thus a Transformer-based (Vaswani et al., 2017) encoder (say, BERT) seems to be a natural choice.", "On the other side, considering that almost all typical CGEC models are built based on the paradigms of sequence tagging or sequence translation, Maximum Likelihood Estimation (MLE) (Myung, 2003) is usually used as the parameter learning approach, 
which, in the scenario of CGEC, will suffer from a severe class/tag imbalance issue.", "However, no previous work investigates this problem thoroughly on the task of CGEC.", "To conquer all the above-mentioned challenges, we propose a new framework named tail-to-tail non-", "autoregressive sequence prediction, which is abbreviated as TtT , for the problem of CGEC.", "Specifically, to directly move the token information from the bottom tail to the up tail, a BERT-based sequence encoder is introduced to conduct bidirectional representation learning.", "In order to conduct substitution, deletion, insertion, and local paraphrasing simultaneously, inspired by Sun et al. (2019), a Conditional Random Fields (CRF) (Lafferty et al., 2001) layer is stacked on the up tail to conduct non-autoregressive sequence prediction by modeling the dependencies among neighbouring tokens.", "A focal loss penalty strategy (Lin et al., 2020) is adopted to alleviate the class imbalance problem, considering that most of the tokens in a sentence are not changed.", "In summary, our contributions are as follows: A new framework named tail-to-tail non-autoregressive sequence prediction (TtT) is proposed to tackle the problem of CGEC.", "A BERT encoder with a CRF layer is employed as the backbone, which can conduct substitution, deletion, insertion, and local paraphrasing simultaneously.", "A focal loss penalty strategy is adopted to alleviate the class imbalance problem, considering that most of the tokens in a sentence are not changed.", "Extensive experiments on several benchmark datasets, especially on the variable-length grammatical correction datasets, demonstrate the effectiveness of the proposed approach.", "Figure 3 depicts the basic components of our proposed framework TtT.", "The input is an incorrect sentence $X = (x_1, x_2, \ldots, x_T)$ which contains grammatical errors, where $x_i$ denotes each token (Chinese character) in the sentence, and $T$ is the length of $X$.", "The objective of the grammatical error correction task is to correct all errors in $X$ and generate a new sentence 
$Y = (y_1, y_2, \ldots, y_{T'})$.", "Here, it is important to emphasize that $T$ is not necessarily equal to $T'$.", "Therefore, $T'$ can be equal to, greater than, or less than $T$.", "Bidirectional semantic modeling and direct bottom-to-up token information conveying are conducted by several Transformer (Vaswani et al., 2017) layers.", "A Conditional Random Fields (CRF) (Lafferty et al., 2001) layer is stacked on the up tail to conduct the non-autoregressive sequence generation by modeling the dependencies among neighboring tokens.", "Low-rank decomposition and a beamed Viterbi algorithm are introduced to accelerate the computations.", "A focal loss penalty strategy (Lin et al., 2020) is adopted to alleviate the class imbalance problem during the training stage.", "The length $T'$ of the target sentence $Y$ is not necessarily equal to the length $T$ of the input sequence $X$.", "Then, in the training and inference stages, different lengths will affect the completeness of the predicted sentence, especially when $T < T'$.", "To handle this issue, several simple tricks are designed to pre-process the samples.", "Assuming $X = (x_1, x_2, x_3, \text{<eos>})$: (1) when $T = T'$, i.e., $Y = (y_1, y_2, y_3, \text{<eos>})$, we do nothing; (2) when $T > T'$, say $Y = (y_1, y_2, \text{<eos>})$, some tokens in $X$ will be deleted during correcting.", "Then, in the training stage, we can pad $T - T'$ special tokens <pad> to the tail of $Y$ to make the lengths equal, i.e., $Y = (y_1, y_2, \text{<eos>}, \text{<pad>})$; (3) when $T < T'$, say $Y = (y_1, y_2, y_3, y_4, y_5, \text{<eos>})$, more information should be inserted into the original sentence $X$.", "Then, we pad the special symbol <mask> to the tail of $X$ to indicate that these positions can possibly be translated into new real tokens: $X = (x_1, x_2, x_3, \text{<eos>}, \text{<mask>}, \text{<mask>})$.", "Transformer layers (Vaswani et al., 2017) are particularly well suited to conducting the bidirectional semantic modeling and bottom-to-up information conveying.", "As shown in Figure 3, after preparing the input samples, an embedding layer and a stack of Transformer layers initialized with a pre-trained Chinese BERT (Devlin et al., 2019) conduct the semantic modeling.", "Specifically, for the input, we first obtain the representations by summing the word embeddings with the positional embeddings: $H^0_t = E^w_t + E^p_t$ (1), where 0 is the layer index and $t$ is the state index.", "$E^w$ and $E^p$ are the embedding vectors for tokens and positions, respectively.", "Then the obtained embedding vectors $H^0$ are fed into several Transformer layers.", "Multi-head self-attention is used to conduct bidirectional representation learning: $\bar{H}^1_t = \mathrm{LN}(\mathrm{SLF\text{-}ATT}(Q^0_t, K^0, V^0) + H^0_t)$, $H^1_t = \mathrm{LN}(\mathrm{FFN}(\bar{H}^1_t) + \bar{H}^1_t)$, with $Q^0, K^0, V^0 = H^0 W^Q, H^0 W^K, H^0 W^V$ (2), where $\mathrm{SLF\text{-}ATT}(\cdot)$, $\mathrm{LN}(\cdot)$, and $\mathrm{FFN}(\cdot)$ represent the self-attention mechanism, layer normalization, and feed-forward network, respectively (Vaswani et al., 2017).", "Note that our model is a non-autoregressive sequence prediction framework; thus we use all the sequence states $K^0$ and $V^0$ as the attention context.", "Then each node will absorb the context information bidirectionally.", "After $L$ Transformer layers, we obtain the final output representation vectors $H^L \in \mathbb{R}^{\max(T, T') \times d}$.", "Direct Prediction: The objective of our model is to translate the input sentence $X$, which contains grammatical errors, into a correct sentence $Y$.", "Then, since 
we have obtained the sequence representation vectors $H^L$, we can directly add a softmax layer to predict the results, similar to the methods used in non-autoregressive neural machine translation (Gu and Kong, 2020) and in BERT-based fine-tuning frameworks for the task of grammatical error correction (Zhao et al., 2019; Hong et al., 2019; Zhang et al., 2020b).", "Specifically, a linear transformation layer is plugged in and a softmax operation is utilized to generate a probability distribution $P_{dp}(y_t)$ over the target vocabulary $V$: $s_t = h_t^{\top} W_s + b_s$, $P_{dp}(y_t) = \mathrm{softmax}(s_t)$ (3), where $h_t \in \mathbb{R}^d$, $W_s \in \mathbb{R}^{d \times |V|}$, $b_s \in \mathbb{R}^{|V|}$, and $s_t \in \mathbb{R}^{|V|}$.", "Then we obtain the result for each state based on the predicted distribution: $y'_t = \arg\max(P_{dp}(y_t))$ (4).", "However, although this direct prediction method is effective on the fixed-length grammatical error correction problem, it can only conduct the same-position substitution operation.", "For complex correcting cases which require deletion, insertion, and local paraphrasing, the performance is unacceptable.", "This inferior performance phenomenon is also discussed in the task of non-autoregressive neural machine translation (Gu and Kong, 2020).", "One of the essential reasons for the inferior performance is that the dependency information among neighbouring tokens is missing.", "Therefore, dependency modeling should be brought back to improve the performance of generation.", "Naturally, a linear-chain CRF (Lafferty et al., 2001) is introduced to fix this issue, and Sun et al. (2019) also employ a CRF to address the problem of non-autoregressive sequence generation, which inspired us a lot.", "Given the input sequence $X$, under the CRF framework, the likelihood of the target sequence $Y$ with length $T'$ is: $P_{crf}(Y|X) = \frac{1}{Z(X)} \exp\big( \sum_{t=1}^{T'} s(y_t) + \sum_{t=2}^{T'} t(y_{t-1}, y_t) \big)$ (5),", "where $Z(X)$ is the normalizing factor and $s(y_t)$ represents the label score of $y_t$ at position $t$, which can be obtained from the predicted logit vector $s_t \in \mathbb{R}^{|V|}$ from Eq.", "(3), i.e., $s_t(V_{y_t})$, where $V_{y_t}$ is the vocabulary index of token $y_t$.", "The value $t(y_{t-1}, y_t) = M_{y_{t-1}, y_t}$ denotes the transition score from token $y_{t-1}$ to $y_t$, where $M \in \mathbb{R}^{|V| \times |V|}$ is the transition matrix, which is the core term for conducting dependency modeling.", "Usually, $M$ can be learnt as neural network parameters during the end-to-end training procedure.", "However, $|V|$ is typically very large, especially in text generation scenarios (more than 32k); therefore it is infeasible to obtain $M$ and $Z(X)$ efficiently in practice.", "To overcome this obstacle, following the method used in Sun et al. (2019), we introduce two low-rank neural parameter matrices $E_1, E_2 \in \mathbb{R}^{|V| \times d_m}$ to approximate the full-rank transition matrix $M$ by: $M = E_1 E_2^{\top}$ (6), where $d_m \ll |V|$.", "To compute the normalizing factor $Z(X)$, the original Viterbi algorithm (Forney, 1973; Lafferty et al., 2001) needs to search all paths.", "To improve the efficiency, here we only visit the truncated top-k nodes at each time step, approximately (Sun et al., 2019).", "Considering the characteristic of direct bottom-to-up information conveying in the CGEC task, both tasks, direct prediction and CRF-based dependency modeling, can be incorporated jointly into a unified framework during the training stage.", "The reason is that, intuitively, direct prediction will focus on the fine-grained predictions at each position, while the CRF layer will pay more attention to the high-level quality of the whole global sequence.", "We employ Maximum Likelihood 
Estimation (MLE) to conduct parameter learning and treat the negative log-likelihood (NLL) as the loss function.", "Thus, the optimization objective for direct prediction is: $\mathcal{L}_{dp} = -\sum_{t=1}^{T'} \log P_{dp}(y_t | X)$ (7).", "And the loss function for CRF-based dependency modeling is: $\mathcal{L}_{crf} = -\log P_{crf}(Y|X)$ (8).", "Then the final optimization objective is: $\mathcal{L} = \mathcal{L}_{dp} + \mathcal{L}_{crf}$ (9).", "As mentioned in Section 1, one obvious but crucial phenomenon for CGEC is that most words in a sentence are correct and need not be changed.", "Considering that maximum likelihood estimation is used as the parameter learning approach in those two tasks, a simple copy strategy can lead to a sharp decline in the loss functions.", "Intuitively, the grammatical error tokens, which need to be correctly fixed in practice, unfortunately attract less attention during the training procedure.", "Actually, these tokens should instead be regarded as the focal points and contribute more to the optimization objectives.", "However, no previous work investigates this problem thoroughly on the task of CGEC.", "To alleviate this issue, we introduce a useful trick, focal loss (Lin et al., 2020), into our loss functions for direct prediction and CRF: $\mathcal{L}^{fl}_{dp} = -\sum_{t=1}^{T'} (1 - P_{dp}(y_t|X))^{\gamma} \log P_{dp}(y_t|X)$ and $\mathcal{L}^{fl}_{crf} = -(1 - P_{crf}(Y|X))^{\gamma} \log P_{crf}(Y|X)$ (10), where $\gamma$ is a hyperparameter to control the penalty weight.", "It is obvious that $\mathcal{L}^{fl}_{dp}$ is penalized on the token level, while $\mathcal{L}^{fl}_{crf}$ is weighted on the sample level and takes effect under batch training.", "The final optimization objective with the focal penalty strategy is: $\mathcal{L}^{fl} = \mathcal{L}^{fl}_{dp} + \mathcal{L}^{fl}_{crf}$ (11). Inference: During the inference stage, for the input source sentence $X$, we can employ the original $|V|$-node Viterbi algorithm to obtain the globally optimal result.", "We can also utilize the truncated top-k Viterbi algorithm for high computing efficiency (Sun et al., 2019).", "The core technical components of our proposed TtT are the Transformer (Vaswani et al., 2017) and the CRF (Lafferty et al., 2001).", "The pre-trained Chinese BERT-base model (Devlin et al., 2019) is employed to initialize the model.", "To approximate the transition matrix in the CRF layer, we set the dimension $d_m$ of the matrices $E_1$ and $E_2$ to 32.", "For the normalizing factor $Z(X)$, we set the predefined beam size $k$ to 64.", "The hyperparameter $\gamma$, which is used to weight the focal penalty term, is set to", "0.5 after parameter tuning.", "The training batch size is 100, the learning rate is 1e-5, and the dropout rate is", "0.1.", "The Adam optimizer (Kingma and Ba, 2015) is used to conduct the parameter learning.", "The datasets used in our experiments are depicted in Table 1.", "SIGHAN15 (Tseng et al., 2015; http://ir.itc.ntnu.edu.tw/lre/sighan8csc.html): This is a benchmark dataset for the evaluation of CGEC, and it contains 2,339 samples for training and 1,100 samples for testing.", "As done in typical previous works (Wang et al., 2019; Zhang et al., 2020b), we also use the SIGHAN15 testset as the benchmark dataset to evaluate the performance of our models as well as the baseline methods in fixed-length (FixLen) error correction settings.", "HybirdSet (Wang et al., 2018; https://github.com/wdimmy/Automatic-Corpus-Generation): This is a newly released dataset constructed according to a prepared confusion set based on the results of ASR (Yu and Deng, 2014) and OCR (Tong and Evans, 1996).", "This dataset contains about 270k paired samples and it is also a FixLen dataset.", "TtTSet: Considering that SIGHAN15 and HybirdSet are both FixLen-type datasets, in order to demonstrate the capability of our model TtT in the Variable-Length (VarLen) CGEC scenario, based on the corpus of HybirdSet, we 
demonstrate the capability of our model TiT on the scenario of Variable-Length (VarLen) CGEC, based on the corpus of HybirdSet , we 2 http://ir.itc.ntnu.edu.tw/lre/ sighan8csc.html 3 https://github.com/wdimmy/ Automatic-Corpus-Generation build a new VarLen dataset.", "Specifically, operations of deletion, insertion, and local shuffling are conducted on the original sentences to obtain the incorrect samples.", "Each operation covers one-third of samples, thus we get about 540k samples finally.", "We compare the performance of TtT with several strong baseline methods on both FixLen and VarLen settings.", "NTOU employs n-gram language model with a reranking strategy to conduct prediction (Tseng et al., 2015).", "NCTU-NTUT also uses CRF to conduct label dependency modeling (Tseng et al., 2015).", "HanSpeller++ employs Hidden Markov Model with a reranking strategy to conduct the prediction (Zhang et al., 2015).", "Hybrid utilizes LSTM-based seq2seq framework to conduct generation (Wang et al., 2018) and Confusionset introduces a copy mechanism into seq2seq framework (Wang et al., 2019).", "FASPell incorporates BERT into the seq2seq for better performance (Hong et al., 2019).", "SoftMask-BERT firstly conducts error detection using a GRU-based model and then incorporating the predicted results with the BERT model using a soft-masked strategy (Zhang et al., 2020b).", "Note that the best results of SoftMask-BERT are obtained after pre-training on a large-scale dataset with 500M paired samples.", "SpellGCN proposes to incorporate phonological and visual similarity knowledge into language models via a specialized graph convolutional network (Cheng et al., 2020).", "Chunk proposes a chunk-based decoding method with global optimization to correct single character and multi-character word typos in a unified framework (Bao et al., 2020).", "We also implement some classical methods for comparison and ablation analysis, especially for the VarLen correction problem.", "Transformer-s2s is the typical Transformer-based seq2seq framework for sequence prediction (Vaswani et al., 2017).", "GPT2-finetune is also a sequence generation framework fine-tuned based on a pre-trained Chinese GPT2 model 4 (Radford et al., 2019; Li, 2020).", "BERT-finetune is just fine-tune the Chinese BERT model on the CGEC corpus directly.", "Beam search decoding strategy is employed to con-4 https://github.com/lipiji/Guyu Model Detection Correction ACC .", "duct generation for Transformer-s2s and GPT2-finetune, and beam-size is 5.", "Note that some of the original methods above mentioned can only work in the FixLen settings, such as SoftMask-BERT and BERT-finetune .", "Following the typical previous works (Wang et al., 2019; Hong et al., 2019; Zhang et al., 2020b), we employ sentence-level Accuracy , Precision , Recall , and F1-Measure as the automatic metrics to evaluate the performance of all systems 5 .", "We also report the detailed results for error Detection (all locations of incorrect characters in a given sentence should be completely identical with the gold standard) and Correction (all locations and corresponding corrections of incorrect characters should be completely identical with the gold standard) respectively (Tseng et al., 2015).", "Table 2 depicts the main evaluation results of our proposed framework TtT as well as the comparison baseline methods.", "It should be emphasized 5 http://nlp.ee.ncu.edu.tw/resource/csc.", "that SoftMask-BERT is pre-trained on a 500M-size paired dataset.", "Our model TtT, as well as the 
baseline methods such as Transformer-s2s, GPT2-finetune, BERT-finetune, and Hybird are all trained on the 270k-size HybirdSet.", "Nevertheless, TtT obtains improvements on the tasks of error Detection (F1: 77 . 7 81 . 6 ) and Correction (F1: 75 . 9 80 . 0 ) compared to all strong baselines on F1 metric, which indicates the superiority of our proposed approach.", "Benefit from the CRF-based dependency modeling component, TtT can conduct deletion, insertion, local paraphrasing operations jointly to address the Variable-Length (VarLen) error correction problem.", "The experimental results are described in Table 3.", "Considering that those sequence generation methods such as Transformer-s2s and GPT2-finetune can also conduct VarLen correction operation, thus we report their results as well.", "From the results, we can observe that TtT can also achieve a superior performance in the VarLen scenario.", "The reasons are clear: BERT-finetune as well as the related methods are not appropriate in VarLen scenario, especially when the target is longer than the input.", "The text generation models such as Transformer-s2s and GPT2-finetune suffer from the problem of hallucination (Maynez et al., 2020) and repetition, TrainSet Model Detection Correction ACC .", "which are not steady on the problem of CGEC.", "Different Training Dataset Recall that we introduce several groups of training datasets in different scales as depicted in Table 1.", "It is also very interesting to investigate the performances on different-size datasets.", "Then we conduct training on those training datasets and report the results still on the SIGHAN2015 testset.", "The results are shown in Table 4.", "No matter what scale of the dataset is, TtT always obtains the best performance.", "Impact of L dp and L crf Table 5 describes the performance of our model TtT and the variants without L dp (TtT w/o L dp ) and L crf (TtT w/o L crf ).", "We can conclude that the fusion of these two tasks, direct prediction and CRF-based dependency modeling, can indeed improve the performance.", "Parameter Tuning for Focal Loss The focal loss penalty hyperparameter is crucial for the loss function L = L dp + L crf and should be adjusted on the specific tasks (Lin et al., 2020).", "We conduct grid search for (0 , 0 . 1 , 0 . 
0.5, 1, 2, 5\}$ and the corresponding results are provided in Table 6.", "Finally, we select $\gamma = 0.5$", "for TtT on the CGEC task.", "Practically, CGEC is an essential and useful task, and the techniques can be used in many real applications such as writing assistants, post-processing of ASR and OCR, search engines, etc.", "Therefore, the time efficiency of models is a key point which needs to be taken into account.", "Table 7 depicts the time cost per sample of our model TtT and", "some baseline approaches (Table 7, computing efficiency; Model / Time (ms) / Speedup: Transformer-s2s 815.40 / 1x; GPT2-finetune 552.82 / 1.47x; TtT 39.25 / 20.77x; BERT-finetune 14.72 / 55.35x).", "The results demonstrate that TtT is a cost-effective method with superior prediction performance and low computing time complexity, and can be deployed online directly.", "We propose a new framework named tail-to-tail non-autoregressive sequence prediction, abbreviated as TtT , for the problem of CGEC.", "A BERT-based sequence encoder is introduced to conduct bidirectional representation learning.", "In order to conduct substitution, deletion, insertion, and local paraphrasing simultaneously, a CRF layer is stacked on the up tail to conduct non-autoregressive sequence prediction by modeling the dependencies among neighbouring tokens.", "Low-rank decomposition and a truncated Viterbi algorithm are introduced to accelerate the computations.", "A focal loss penalty strategy is adopted to alleviate the class imbalance problem, considering that most of the tokens in a sentence are not changed.", "Experimental results on standard datasets demonstrate the effectiveness of TtT in terms of sentence-level Accuracy, Precision, Recall, and F1-Measure on tasks of error Detection and Correction.", "TtT is of low computing complexity and can be deployed online directly.", "In the future, we plan to introduce more lexical analysis knowledge such as word segmentation and fine-grained named entity recognition (Zhang et al., 2020a) to further improve the performance." ]
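The TtT objective in the excerpt above combines a token-level direct-prediction loss and a sequence-level CRF loss, each down-weighted for easy examples by a focal factor with exponent gamma (Eqs. 10-11, with gamma = 0.5). The PyTorch sketch below illustrates the focal weighting only; `crf_log_likelihood` is a hypothetical stand-in for the low-rank linear-chain CRF of Eqs. 5-6, not a real library call.

```python
# Sketch of the focal-weighted joint objective of Eqs. (10)-(11);
# `crf_log_likelihood` is a hypothetical low-rank CRF module (Eqs. 5-6).
import torch
import torch.nn.functional as F

def ttt_focal_loss(logits, targets, crf_log_likelihood, gamma=0.5):
    # logits: (T, |V|) per-position token logits; targets: (T,) gold ids.
    log_p = F.log_softmax(logits, dim=-1)
    log_p_gold = log_p.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    p_gold = log_p_gold.exp()
    # Token-level focal direct prediction (Eq. 10, first line): easy,
    # mostly-copied tokens are down-weighted by (1 - p)^gamma.
    l_dp = -((1.0 - p_gold) ** gamma * log_p_gold).sum()
    # Sentence-level focal CRF term (Eq. 10, second line); the CRF module
    # is assumed to return log P_crf(Y|X) as a scalar tensor.
    log_p_seq = crf_log_likelihood(logits, targets)
    l_crf = -((1.0 - log_p_seq.exp()) ** gamma) * log_p_seq
    return l_dp + l_crf  # Eq. (11)
```

With gamma = 0, both terms reduce to plain NLL (Eqs. 7-9); raising gamma shifts the gradient budget toward the hard, actually-erroneous tokens, which is the imbalance fix the excerpt motivates.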
[ "objective", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result" ]
[ "The current aspect extraction methods suffer from boundary errors.", "These errors lead to a relatively minor difference between the extracted aspects and the ground-truth.", "However, they hurt the performance severely.", "In this paper, we propose to utilize a pointer network for repositioning the boundaries.", "Recycling mechanism is used which enables the training data to be collected without manual intervention.", "We conduct the experiments on the benchmark datasets SE14 of laptop and SE14-16 of restaurant.", "Experimental results show that our method achieves substantial improvements over the baseline, and outperforms state-of-the-art methods.", "Aspect extraction (Hu and Liu, 2004) is a crucial task in the field of real-world aspect-oriented sentiment analysis, where an aspect stands for a sequence of tokens which adhere to a specific sentiment word, in general, serving as the target on which people express their views.", "For example, the tokens twist on pizza is the aspect of the opinion healthy in 1).", "In this paper, we concentrate on the study of aspect extraction conditioned on the unawareness of sentiment words.", "1) Their twist on pizza is healthy .", "Ground-truth : twist on pizza Predicted : [BOUND] pizza [BOUND]", "2) Buy the separate RAM memory and you will have a rocket .", "Ground-truth : RAM memory Predicted : [BOUND] separate RAM memory [BOUND] What is undoubtedly true is that the existing neural aspect extraction methods (Section 5.3) have achieved remarkable success to some extent.", "The peak performance on the benchmark datasets, to our best knowledge, is up to 85.61% F1-score (Li et al., 2018).", "We suggest that further improvements can be made by fine-tuning the boundaries of the extracted aspects.", "It is so because some incorrectly-extracted aspects result from minor boundary errors, where the boundaries refer to the start and end positions of a token sequence.", "For example, reinstating the omitted words twist on and trimming the redundant word separate in", "1) and", "2) by changing the start positions contributes to the recall of the correct aspects.", "We propose to utilize a pointer network for repositioning the boundaries (Section 2).", "The pointer network is separately trained, and it is only used to post-process the resultant aspects output by a certain extractor (Section 3).", "Supervised learning is pre-requisite for obtaining a well-trained pointer network.", "However, so far, there is a lack of boundary-misspecified negative examples to construct the training set.", "Instead of manually labeling negative examples, we recycle those occurring during the time when the extractor is trained (Section 4).", "Our contributions in this paper are as follows: By means of a pointer network, we refine the boundary-misspecified aspects.", "The separately-trained pointer network serves as a post-processor and therefore can be easily coupled with different aspect extractors.", "The use of recycling mechanism facilitates the process of constructing the training set.", "We train a pointer network to predict the start and end positions of the correct aspect.", "What we feed into the network include a candidate aspect and the sentence which contains the candidate (herein called source sentence).", "The candidate may be a boundary-misspecified aspect, truly-correct aspect or other text span.", "The network outputs two words w s and w e , one of which is predicted to be the start position, the other the end: (cid:40) w s = arg max", ".p s ( w s ) w e = arg 
max", ".p e ( w e ) (1) where, P (*) denotes the probability that a word serves as the start or end position, and arg max refers to the maximum likelihood estimation.", "The text span which lies between the start and end positions w s and w e will be eventually selected as the boundary-repositioned aspect.", "It is noteworthy that, during testing, the status (boundary-misspecified, truly-correct or other) of the candidate aspect is assumed to be unknown.", "This is derived from the consideration of the practical situation in which the status of the pre-extracted aspect is unforeseeable.", "Encoding Assume C = { w 1 , ..., w n } represents the candidate aspect, where w ci R l stands for the combination of the word, position and segment embeddings of the i -th token in C .", "The source sentence is represented in the same way and denoted by U = { w 1 ,..., w m } .", "We concatenate C and U to construct the input representation: WC U = [CLS , C, SEP , U, SEP] (2) where, CLS denotes the embedding of a dummy variable, while SEP is that of a separator (Devlin et al., 2019).", "In our experiments, WordPiece embeddings are used which can be obtained from the lookup table of Wu et al. (2016).", "The embeddings of position, segment, separator and dummy variable are initialized randomly.", "We encode each element w i in the input representation WC U by fine-tuning BERT (Devlin et al., 2019): h i = BERT( w i ) , i [1, n + m +3].", "Decoding Due to the use of the multi-head self-attention mechanism (Vaswani et al., 2017), BERT is able to perceive and more heavily weight the attentive words in the source sentence U , according to the information in the candidate aspect C , and vice versa.", "This property allows the attention-worthy words out of C to be salvaged and meanwhile enables the attention-unworthy words in C to be laid aside.", "On the other hand, a trainable decoder tends to learn the consistency between the ground-truth aspect and the attentive words.", "Therefore, we suppose that the decoder is able to leave the boundaries of C unchanged if C aligns with the ground-truth aspect, otherwise redefine the boundaries in U in terms of the attentive words.", "Following the practice in prior research (Vinyals et al., 2015), we decode the representation h i with a linear layer and the softmax function, where W R 2 l and b R 2 are trainable parameters: (cid:20) p s ( w i ) p e ( w i ) (cid:21) = softmax( W h i + b ) (3) Training Our goal is to assign higher probabilities to the start and end positions w s and w e for all the ground-truth aspects in the training set.", "Therefore, we measure loss by calculating the average negative log-likelihood for all pairs of w s and w e : LB = 1 NB NB (cid:88) i =1 (cid:104) log p s ( w si ) + log p e ( w ei ) 2 (cid:105) (4) where, NB is the number of ground-truth aspects.", "During training, we obtain the parameters W and b in equation (3) by minimizing the loss LB .", "We use the pointer network to post-process the pre-extracted aspects (which are referred to the candidate aspects in section 2).", "In our experiments, we employ a BiLSTM-CRF model to obtain the candidate aspects.", "In this case, we solve aspect pre-extraction as a sequence labeling task.", "BIO labeling space y = { B , I , O } (Xu et al., 2018) is specified as the output for each token in the source sentence, in which B , I and O respectively signal the beginning of an aspect, inside of an aspect and non-aspect word.", "First of all, we represent the tokens in the source sentence using GloVe 
"On this basis, we use a bidirectional recurrent neural network with Long Short-Term Memory (BiLSTM for short) (Liu et al., 2015) to encode each token, so as to obtain the initial hidden state vector h_i^lstm.", "A self-attention mechanism (Vaswani et al., 2017) is utilized for the resolution of long-distance dependencies, by which we obtain the attention-weighted hidden state h_i^att.", "We concatenate h_i^lstm and h_i^att to produce the final feature vector for the i-th token: h_i = h_i^lstm ⊕ h_i^att.", "Conditioned on the feature vector h_i emitted by the BiLSTM with attention, we estimate the emission probabilities that the i-th token may serve as B, I, and O, respectively.", "A fully-connected dense layer is used to map h_i to the BIO labeling space: p_i(BIO) = f_den(h_i).", "Over the emission probabilities of all the tokens in the source sentence, we utilize a linear-chain Conditional Random Field (CRF) (Wang et al., 2016) to predict the optimal BIO label sequence.", "Eventually, the tokens labeled with B and I will be taken as the aspects.", "We train the extractor by maximizing the log-likelihood of sequence labeling (Luo et al., 2019): L_E = Σ_{i=1}^{N_E} log P(y | f_den(h_i), W, b) (5), where N_E denotes the number of tokens in the training set, W is a trainable parameter which plays the role of the transition matrix in the CRF, and b is the bias.", "The extractor can be trained on the benchmark datasets provided by the SemEval tasks (Pontiki et al., 2016).", "However, it is impractical to separately train the positioner because there is a lack of boundary-misspecified negative examples.", "To solve the problem, we recycle the negative examples occurring during the training of the extractor.", "We define a negative example to be a text span which partially overlaps with the ground-truth aspect.", "The text spans which are completely inconsistent with the ground-truth are not considered.", "For example, Fresh ingrediants in 3) is an eligible negative example, but super tasty is ineligible.", "3) Fresh ingrediants and super tasty.", "Ground-truth: ingrediants; Eligible: Fresh ingrediants; Ineligible: super tasty.", "We maintain a table that maps each ground-truth aspect to a list of negative examples.", "We initialize the mapping table by taking ground-truth aspects as entries and assigning an empty list to each of them.", "For each entry, we traverse the results output by the extractor in each training epoch and pick up the eligible negative examples.", "The newly-observed negative examples will be added to the list of the entry only if they have not yet been included in the list.", "We perform recycling in the first 20 epochs.", "Few examples can be found in the subsequent epochs.", "We evaluate the proposed methods on the laptop and restaurant datasets provided by the SemEval 2014-2016 aspect-based sentiment analysis tasks (SE14-16 for short) (Pontiki et al., 2014, 2015, 2016).", "For comparison purposes, we follow previous work and randomly select 20% of the official training data to form the validation set.", "Table 1 shows the sample statistics of the training, validation, and test sets, as well as those of the recycled negative examples (denoted by Neg).", "For the aspect pre-extraction model, we initialize all word embeddings with 100-dimensional GloVe word embeddings (Pennington et al., 2014).", "Each BiLSTM unit has 100 dimensions, and the number of hidden states in the self-attention layer is set to 200.", "We employ dropout on the output layer of the BiLSTM (i.e., the penultimate layer), with the dropout rate set to 0.5.",
"The learning rate for parameter updating is set to 1e-3.", "For the boundary repositioning model, we employ the basic BERT (Devlin et al., 2019) as the encoder, which contains 12 transformer encoding blocks.", "Each block holds 768 hidden units and 12 self-attention heads.", "During training, the maximum length of the input sequence is set to 180 and the batch size is set to 10.", "The learning rate is set to 3e-5 and the number of training epochs is set to 5.", "We compare with the state-of-the-art models.", "Taking the learning framework as the criterion, we divide the models into two classes.", "Single-task Learning: In the family of aspect-oriented single-task learning, the traditional feature-engineering-based CRF (https://sklearn-crfsuite.readthedocs.io/en/latest/tutorial.html) was used earliest.", "On this basis, HIS_RD (Chernyshevich, 2014) additionally utilizes part-of-speech and named entity features.", "NLANGP (Toh and Su, 2016) first incorporates syntactic features and word embeddings.", "HIS_RD and NLANGP top the list for aspect extraction in the 2014 and 2016 SemEval challenges.", "During the period, WDEmb (Yin et al., 2016) enhances word embeddings using the linear context.", "Liu et al. (2015)'s work may be the first attempt to directly use a vanilla LSTM for aspect analysis.", "Soon afterwards, Xu et al. (2018) construct a multi-layer Convolutional Neural Network (DE-CNN) which integrates GloVe and domain-specific embeddings.", "More recently, Ma et al. (2019) use Sequence-to-Sequence learning (Seq2Seq4ATE) with GRUs and a position-aware attention mechanism.", "Multi-task Learning: For aspect-oriented multi-task learning, Li and Lam (2017) design a triple-LSTM model (MIN) to share the features generated for the extraction and classification tasks.", "CMLA (Wang et al., 2017) uses a multilayer attention mechanism for the joint extraction of aspect terms and sentiment words.", "HAST (Li et al., 2018) strengthens the joint model using truncated history attention and a selective transformation network.", "RINANTE (Dai and Song, 2019) shares features in the bottom layer of BiLSTM-CRF and uses distant supervision to expand the training data.", "Similar to RINANTE, our aspect pre-extraction model (Baseline) is based on BiLSTM-CRF.", "However, we force it to work in the single-task learning framework.", "More importantly, instead of distant supervision, we use the recycling mechanism to acquire local boundary-misspecified examples, and instead of retraining BiLSTM-CRF for use, we only reposition the boundaries of the resultant aspects.", "We show the performance difference over test sets in Table 2.", "It can be observed that the single-task BiLSTM-CRF based extractor either achieves performance comparable to some of the current state-of-the-art methods, or performs worse than others.", "Table 2 (performance comparison, F-scores; columns: SE14-L, SE14-R, SE15-R, SE16-R): CRF: 72.77, 79.72, 62.67, 66.96; HIS-RD (2014): 74.55, 79.62, -, -; LSTM (2015): 75.71, 82.01, 68.26, 70.35; NLANGP (2016): -, -, 67.12, 72.34; WDEmb (2016): 75.16, 84.97, 69.73, -; DE-CNN (2018): 81.59, 85.20, 68.28, 74.37; Seq2Seq (2019): 80.31, -, -, 75.14; MIN (2017): 77.58, -, -, 73.44; CMLA (2017): 77.80, 85.29, 70.73, -; HAST (2018): 79.52, 85.61, 71.46, 73.61; RINANTE (2019): 73.47, 84.06, 66.17, -; BiSELF-CRF (ours): 78.15, 83.73, 68.81, 73.49; +Repositioning: 81.90, 86.58, 71.72, 75.56.", "Nevertheless, refining the pre-extracted aspects by boundary repositioning yields substantial improvements and achieves the best performance.",
"Figure 1 provides further insight into the test results.", "It shows that, on average, 41% of boundary-misspecified aspects can be successfully salvaged.", "In contrast, only 1.7% of correctly-extracted aspects, on average, are misjudged.", "Besides, few completely erroneous extraction results can be rectified.", "In a separate experiment, we examine the adaptation performance of boundary repositioning.", "The original pre-extraction model is replaced by fine-tuned BERT and by a more sophisticated model.", "The former is coupled with a dense layer and a softmax layer.", "The latter is constructed by coupling fine-tuned BERT with the BiSELF-CRF network.", "Meanwhile, the set of negative examples which were recycled in the earlier experiment remains unchanged.", "Table 3 shows the test results.", "It can be observed that boundary repositioning still achieves considerable improvements in performance.", "This demonstrates its robust adaptation ability.", "We also verify whether boundary repositioning can cooperate with the existing methods.", "Considering that DE-CNN (Xu et al., 2018) has a competitive advantage, we use it in this case study.", "We utilize DE-CNN for pre-extracting aspects and conduct boundary repositioning over the resultant aspects.", "The following notes need to be considered when conducting a similar experiment.", "Both the source code of Xu et al. (2018)'s DE-CNN and the preprocessed input data for SE14-L and SE16-R are publicly available.", "Conditioned on the input data, the retrained DE-CNN obtains similar performance to that reported in Xu et al. (2018)'s study.", "Dai et al. (2019) reported the performance of DE-CNN on SE14-R and SE15-R.", "However, it was not mentioned whether Xu et al. (2018)'s open-source DE-CNN was used or whether it was reproduced.", "We retrained Xu et al. (2018)'s open-source DE-CNN and preprocessed the input data for SE14-R and SE15-R all over again.", "The obtained performance on these datasets is worse than that reported in Dai et al. (2019)'s work.", "Table 4 shows the performance of DE-CNN, including the performance reported in Xu et al. (2018) and Dai et al. (2019)'s work, that of the retrained DE-CNN, as well as that of the one coupled with boundary repositioning.", "It can be observed that boundary repositioning yields substantial improvements over the retrained DE-CNN on all four datasets.", "Compared to the reported performance, the use of boundary repositioning also results in significant improvements on SE14-L, SE15-R, and SE16-R.", "We follow Johnson (1999) in using sampling-based p-values to examine significance.", "Johnson (1999) suggests that the ideal p-value threshold is 0.05.", "That is, a system achieves significant improvements over others only if the p-values are less than 0.05; otherwise the improvements are insignificant.", "Besides, it has been shown that the smaller the p-value, the higher the significance (Dror et al., 2018).", "We form the updated versions of BiSELF-CRF and DE-CNN by coupling them with boundary repositioning.", "On this basis, we compute p-values by comparing the extraction results of the two models to those of the updated versions.", "Table 5 shows the p-values.", "It can be observed that the p-values are much lower than the threshold.", "This demonstrates that boundary repositioning produces significant improvements.", "In brief, we prove that boundary repositioning can be used as a reliable post-processing method for aspect extraction.",
"The source code of boundary repositioning to reproduce the above experiments has been made publicly available.", "We submit the source code and instructions along with this paper.", "Our experimental results demonstrate that boundary repositioning can be used as a simple and robust post-processing method to improve aspect extraction.", "Our findings reveal that illustrative aspects in scientific literature are generally long-winded.", "Extracting these aspects suffers more severely from boundary errors.", "In the future, we will develop a syntax-based multi-scale graph convolutional network to deal with both short and long aspects.", "We thank the reviewers for their insightful comments.", "The idea was proposed by the corresponding author (Yu Hong).", "Yuchen Pan provided effective assistance in conducting the experiments.", "We thank our colleagues for their help.", "This work was supported by the National Natural Science Foundation of China (NSFC) via Grant Nos. 61672368, 61672367, 61703293." ]
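The boundary-repositioning head described in the record above (Eqs. 1-4) amounts to a linear start/end pointer over BERT hidden states. A minimal sketch, assuming PyTorch and a HuggingFace-style encoder whose output exposes last_hidden_state; all class and function names here are ours, not the authors':

```python
import torch
import torch.nn as nn

class BoundaryRepositioner(nn.Module):
    """Span pointer head: one linear layer maps each hidden state to
    start/end logits (Eq. 3); the argmax positions give the
    repositioned aspect span (Eq. 1)."""
    def __init__(self, encoder, hidden_size=768):
        super().__init__()
        self.encoder = encoder                      # e.g. a fine-tuned BERT
        self.span_head = nn.Linear(hidden_size, 2)  # W in R^{2 x l}, b in R^2

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        start_logits, end_logits = self.span_head(h).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)    # (B, L) each

def boundary_loss(start_logits, end_logits, start_gold, end_gold):
    # Average negative log-likelihood over start and end, as in Eq. (4).
    ce = nn.CrossEntropyLoss()
    return 0.5 * (ce(start_logits, start_gold) + ce(end_logits, end_gold))
```

At inference, the predicted span is simply (start_logits.argmax(-1), end_logits.argmax(-1)) for each candidate-plus-sentence input.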
[ "abstain", "abstain", "abstain", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "objective", "objective", "objective", "method", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "result", "method", "result", "objective", "abstain", "abstain", "result", "result", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "objective", "other", "other", "other", "other", "other" ]
[ "Sequence-based neural networks show signifi-cant sensitivity to syntactic structure, but they still perform less well on syntactic tasks than tree-based networks.", "Such tree-based networks can be provided with a constituency parse, a dependency parse, or both.", "We evaluate which of these two representational schemes more effectively introduces biases for syntactic structure that increase performance on the subject-verb agreement prediction task.", "We find that a constituency-based network generalizes more robustly than a dependency-based one, and that combining the two types of structure does not yield further improvement.", "Finally, we show that the syntactic robustness of sequential models can be substantially improved by fine-tuning on a small amount of constructed data, suggesting that data augmentation is a viable alternative to explicit constituency structure for imparting the syntactic biases that sequential models are lacking.", "Natural language syntax is structured hierarchically, rather than sequentially (Chomsky, 1957; Everaert et al., 2015).", "One phenomenon that illustrates this fact is English subject-verb agreement , the requirement that verbs and their subjects must match in number.", "The hierarchical structure of a sentence determines which noun phrase each verb must agree with; sequential heuristics such as agreeing with the most recent noun may succeed on simple sentences such as (1a) but fail in more complex cases such as (1b): (1)", "rules governing these dependencies, or whether there is sufficient signal in natural language corpora for low-bias networks (such as sequential LSTMs) to learn these structures.", "We compare sequential LSTMs, which process sentences from left to right, with tree-based LSTMs that process sentences in accordance with an externally-provided, ground-truth syntactic structure.", "We consider two types of syntactic structure: constituency structure (Chomsky, 1993; Pollard and Sag, 1994) and dependency structure (Tes-niere, 1959; Hudson, 1984).", "We investigate models provided with either structure, both structures, or neither structure (see Table 1), and assess how robustly these models learn subject-verb agreement when trained on natural language.", "1 Even with the syntactic biases present in tree-based LSTMs, it is possible that natural language might not impart a strong enough signal to teach a network how to robustly track subject-verb dependencies.", "How might the performance of these tree-based LSTMs change if they were fine-tuned on a small dataset designed to impart a stronger syntactic signal?", "Furthermore, would we still need these tree structures, or could a sequential LSTM now learn to track syntactic dependencies?", "We find that building in either type of syntactic structure improves performance over the BiLSTM 1 Code, data, and models are at https://github.", "com/mlepori1/Representations_Of_Syntax baseline, thus showing that these structures are learned imperfectly (at best) by low-bias models from natural language data.", "Of the two types of structure, constituency structure turns out to be more useful.", "The dependency-only model performs well on natural language test sets, but fails to generalize to an artificially-constructed challenge set.", "After fine-tuning on a small dataset that is designed to impart a strong syntactic signal, the BiLSTM generalizes more robustly, but still falls short of the tree-based LSTMs.", "We conclude that for a network to robustly show sensitivity to syntactic structure, 
stronger biases for syntactic structure need to be introduced than are present in a low-bias learner such as a BiLSTM, and that, at least for the subject-verb agreement task, constituency structure is more important than dependency structure.", "Both tree-based model structure and data augmentation appear to be viable approaches for imparting these biases.", "Prior work has shown that neural networks without explicit mechanisms for representing syntactic structure can show considerable sensitivity to syntactic dependencies (Goldberg, 2019; Gulordava et al., 2018; Linzen et al., 2016), and that certain aspects of the structure of the sentence can be reconstructed from their internal representations (Lin et al., 2019; Giulianelli et al., 2018; Hewitt and Manning, 2019).", "Marvin and Linzen (2018) showed that sequential models still have substantial room for improvement in capturing syntax, and other work has shown that models with a greater degree of syntactic structure outperform sequential models on syntax-sensitive tasks (Yogatama et al., 2018; Kuncoro et al., 2018, 2017), including some of the tree-based models used here (Bowman et al. , 2015; Li et al., 2015).", "One contribution of the present work is to tease apart the two major types of syntactic structure to see which one imparts more effective syntactic biases.", "As our baseline model, we used a simple extension to the LSTM architecture (Hochreiter and Schmid-huber, 1997), the bidirectional LSTM (BiLSTM; Schuster and Paliwal, 1997).", "This model runs one LSTM from left to right over a sequence, and another from right to left, without appealing to tree structure.", "Bidirectional LSTMs outperform unidirectional LSTMs on a variety of tasks (Huang et al., 2015; Chiu and Nichols, 2016), including syntax-sensitive tasks (Kiperwasser and Goldberg, 2016).", "Ravfogel et al. (2019) also employs BiLSTMs for a similar agreement task.", "To study the effects of explicitly building tree structure into the model architecture, we used the Constituency LSTM and the Dependency LSTM (Tai et al., 2015), which are types of recursive neural networks (Goller and Kuchler, 1996).", "The Constituency LSTM operates in accordance with a binary constituency parse, composing together vectors representing a left child and a right child into a vector representing their parent.", "Models similar to the Constituency LSTM have been proposed by Le and Zuidema (2015) and Zhu et al. 
(2015).", "In a Dependency LSTM, the representations of a head's children are summed, and then composed with the representation of the head itself to yield a representation of the phrase that has that head.", "See Appendix A for more details on both models.", "To create a model where composition is simultaneously guided by both a dependency parse and a constituency parse, we modified the constituency model described in Section 3.2, turning it into a head-lexicalized tree LSTM .", "In a standard Constituency LSTM, the input for all non-leaf nodes is a vector of all 0's.", "To add head lexicalization, we instead feed in the word embedding of the correct headword of that constituent as the input, where the choice of headword is determined using the Stanford Dependency Parser (Manning et al., 2014).", "See Appendix B for more details, as well as an example of a head-lexicalized constituency tree.", "This model is similar to the head-lexicalized tree LSTM of Teng and Zhang (2017).", "However, their model learns how to select the heads of constituents in an unsupervised manner; these heads may not correspond to the syntactic notion of heads.", "Because we seek to understand the effect of using the heads derived from the dependency parse, we provide our models with explicit head information.", "We adapted a syntax-sensitive task that previous work has used to assess the syntactic capabilities", "of LSTMsthe number prediction task (Linzen et al., 2016).", "The most standard version of this task is based on a left-to-right language modeling objective; however, tree-based models are not compatible with left-to-right language modeling.", "Therefore, we made two modifications to this objective, both of which have precedents in the literature: First, we gave the model an entire present-tense sentence with main verb masked out, following Goldberg (2019).", "Second, the model's target output was the number of the masked verb: SINGULAR or PLURAL ; we follow Linzen et al. (2016) and Ravfogel et al. (2019) in framing number prediction as a classification task.", "To solve the task, the model must identify the subject whose head is the main verb (in the dependency formalism), and use that information to determine the syntactic number of the verb; e.g., for (2), the answer is SINGULAR .", "Linzen et al. (2016) pointed out that there are several incorrect heuristics which models might adopt for this task because these heuristics still produce decent classification accuracy.", "One salient example is picking the syntactic number of the most recent noun to the left of the verb.", "We hypothesize that tree-based models will be less susceptible to these non-robust heuristics than sequential models.", "Data: We train our models on a subset of the dataset from Linzen et al. 
"We made this choice because our task format differs from that used in some past work (see Section 4), so performance on the task as we have framed it cannot be directly compared to prior work.", "In the absence of baselines from the literature, we use chance performance of 50% as a baseline; to ensure that this baseline is reasonable, we balance the label distribution during training to discourage models from becoming biased toward one label.", "We use two types of test sets: those that contain adversarial attractors, and those that do not.", "An adversarial attractor is a noun that is between the subject and the main verb of a sentence and that has the opposite syntactic number from the subject noun.", "Adversarial attractors have been found to produce agreement errors in humans (Bock and Miller, 1991) and neural models (Goldberg, 2019; Gulordava et al., 2018; Linzen et al., 2016).", "[Figure 1: (a) Accuracy on the natural evaluation set by number of attractors (0-4), and accuracy on the constructed evaluation set, for the BiLSTM, Dependency, Constituency, and Head-lexicalized models. (b) Results for models trained on natural language and then exposed to a 500-sentence augmentation set.]", "We use code from Goldberg (2019) (https://github.com/yoavg/bert-syntax) to extract adversarial datasets containing varying numbers of attractors, from 0 to 4 attractors.", "Sentence (3) provides an example of a sentence with 4 attractors.", "Natural language evaluation: All of the tree-based models outperformed the BiLSTM in the presence of attractors (Figure 1a).", "Compared to prior work with the number prediction task, our BiLSTM performed very poorly on the 4 Attractors dataset.", "However, our results cannot be directly compared to previous work because of the modifications we have made to the task, data, and training procedure in order to accommodate tree-based models.", "In light of these modifications, there are several reasons why the BiLSTM's low accuracy is unsurprising.", "First, we used a balanced label distribution during training.", "In the standard dataset from Linzen et al. (2016), the class labels are not balanced, so models evaluated on that dataset might outperform our BiLSTM by exploiting the biased label distribution, a heuristic that our balanced training set discourages.", "Another potential cause for the BiLSTM's poor performance is that, in order to balance the label frequencies, we used a smaller training set than was used in past work (81,000 sentences instead of 121,000 sentences).", "Finally, it is possible that allowing models to see the entire sentence may allow them to acquire non-robust heuristics related to the words following the main verb.", "For example, a model might learn a spurious correlation between the syntactic number of subjects and their direct objects.", "See Appendix E, Table 2 for results on all test sets.", "Constructed sentence evaluation: With naturally occurring sentences, it is possible that models perform well not because they have mastered syntax, but rather because of statistical regularities in the data.", "For example, given The players *MASK* the ball, the model may be able to exploit the fact that animate nouns tend to be subjects while inanimate nouns do not.", "As pointed out by Gulordava et al. (2018), this would allow the model to correctly predict syntactic number, but for the wrong reasons.",
"To test whether our models were leveraging this statistical heuristic, we constructed a 400-sentence test set where this heuristic cannot succeed.", "We did so using a probabilistic context-free grammar (PCFG) under which all words of a given part of speech are equally likely in all positions; each sentence from this grammar is of the form Subject-Verb-Object, and all noun phrases can optionally be modified by adjectives and/or prepositional phrases (see Appendix F), as in (4):", "(4) The fern near the sad teachers hates the singer.", "The Dependency LSTM is especially likely to fall prey to word cooccurrence heuristics, as it lacks the ability for a parent to account for the sequential position of its children.", "This can be an issue when determining whether a verb is supposed to be singular or plural, because the model has no robust way to distinguish a verb's subject from its direct object.", "The dependency model did indeed perform at chance (see the bar graph in Figure 1a).", "This suggests that the dependency model's high accuracy is partially due to lexical heuristics rather than syntactic processing.", "In contrast, the other models performed well, suggesting that they are less susceptible to relying on word cooccurrence.", "In Experiment 1, tree-based models dramatically outperformed the BiLSTM in the presence of attractors.", "This difference may have arisen because most natural language sentences are simple, and thus they do not generate enough signal to illustrate the importance of tree structure to a low-bias learner, such as a BiLSTM.", "Recent work has shown the effectiveness of syntactically-motivated fine-tuning at increasing the robustness of neural models (Min et al., 2020).", "Would our models generalize more robustly if we added a few training examples that do not lend themselves to non-syntactic heuristics?", "To provide the model with a stronger signal about the importance of syntactic structure, we fine-tuned our models on a dataset designed to impart this signal.", "We used a variant of the PCFG (see Appendix F) from Section 5 to generate a 500-sentence augmentation set.", "This augmentation set cannot be solved using word cooccurrence statistics, and contains some sentences with attractors.", "The models were then fine-tuned on the augmentation set for just one epoch over the 500 examples.", "See Appendix C.2 for training details.", "Results: The head-lexicalized model and the BiLSTM benefited most from fine-tuning, with the head-lexicalized model now matching the performance of the Constituency LSTM, and the BiLSTM showing dramatic improvement on sentences with multiple attractors (Figure 1b; see Appendix E, Table 3 for detailed results).", "While the BiLSTM's accuracy increased on sentences with attractors, it decreased on the No Attractors test set.", "We suspect that this is because augmentation discouraged the model from using heuristics: while this makes performance more robust overall, it may hurt accuracy on simple examples where the heuristics give the correct answer (Min et al., 2020).", "As expected from its architectural limitations, the Dependency LSTM did not noticeably benefit from fine-tuning.", "Note that most sentences in the test set have only two nouns.", "50% of the time, they will agree in number, and the syntactic number is unambiguous.", "Random guessing on the other 50% of cases would yield about 75% accuracy.", "Overall, we found that neural models trained on natural language achieve much more robust performance on syntactic tasks when syntax is explicitly built into the model.",
"This suggests that the information we provided to our tree-based models is unlikely to be learned from natural language by models with only general inductive biases.", "In Experiment 1, the network provided with a dependency parse did the best on most of the natural language test sets.", "This is unsurprising, as the task is largely about a particular dependency (i.e., the dependency between a verb and its subject).", "At the same time, as demonstrated by the constructed sentence test, the syntactic capabilities of the Dependency LSTM are inherently limited.", "Thus, it must default to non-robust heuristics in cases where the unlabeled dependency information is ambiguous.", "In future work, these syntactic limitations may be overcome by giving the model typed dependencies (which would distinguish between a subject-verb dependency and a verb-object dependency).", "One might expect the head-lexicalized model to perform the best, since it can leverage both syntactic formalisms.", "However, it performs no better than the constituency model when trained on natural language, suggesting that there is little benefit to incorporating dependency structure into a Constituency LSTM.", "In some cases, the head-lexicalized model without fine-tuning even performs worse than the Constituency LSTM.", "When fine-tuned on more challenging constructed examples, the head-lexicalized model performed similarly to the Constituency LSTM, suggesting that there is not enough signal in the natural language training set to teach this model what to do with the heads it has been given.", "Our results point to two possible approaches for improving how models handle syntax.", "The first approach is to use models that have explicit mechanisms for representing syntactic structure.", "In particular, our results suggest that the most important aspect of syntactic structure to include is constituency structure, as constituency models appear to implicitly learn dependency structure as well.", "Note that the constructed test set used here is controlled to have no overlap with the augmentation set.", "Thus, it is not exactly the same as the set used in Section 5, but both corpora are generated from the same CFG.", "Though the models we used require parse trees to be provided, it is possible that models can learn to induce tree structure in an unsupervised or weakly-supervised manner (Bowman et al., 2016; Choi et al., 2018; Shen et al., 2019).", "Another effective approach for improving the syntactic robustness of neural models is data augmentation, as demonstrated in Experiment 2.", "With this approach, it is possible to bring the syntactic performance of less-structured models closer to that of models with explicit tree structure, even with an augmentation set generated simply and easily using a PCFG.", "Future work should further explore both of these approaches.", "Our conclusions about the importance of explicit mechanisms for representing syntactic structure can be strengthened by developing different formulations of the tree LSTMs.", "It seems particularly promising to explore alternative formulations of the Dependency LSTM (as mentioned above) and the effect of learning embeddings of non-terminal symbols for the Constituency LSTM.", "Finally, future work should investigate whether data augmentation can fully bridge the gap between low-bias learners and structured tree LSTMs, and whether our conclusions apply to other syntactic phenomena besides agreement.",
"This research was supported by a Google Faculty Award to Tal Linzen, NSF Graduate Research Fellowship No. 1746891, and NSF Grant No. BCS-1920924.", "Our experiments were conducted using the Maryland Advanced Research Computing Center (MARCC)." ]
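The Dependency (Child-Sum) Tree-LSTM used in the record above follows Tai et al. (2015): the children's hidden states are summed and then composed with the head word's embedding. A sketch of a single node update in PyTorch; the dimensions and naming are illustrative, and batching and tree traversal are omitted:

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """One node update of a Child-Sum Tree-LSTM (Tai et al., 2015).
    Each child gets its own forget gate, so the cell can selectively
    keep or discard information from individual dependents."""
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.iou = nn.Linear(in_dim, 3 * mem_dim)            # input/output/update from x
        self.iou_h = nn.Linear(mem_dim, 3 * mem_dim, bias=False)  # ... from summed children
        self.f = nn.Linear(in_dim, mem_dim)
        self.f_h = nn.Linear(mem_dim, mem_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x: (in_dim,) head word embedding
        # child_h, child_c: (num_children, mem_dim) children's states
        h_tilde = child_h.sum(0)                             # sum of children's hidden states
        i, o, u = torch.chunk(self.iou(x) + self.iou_h(h_tilde), 3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.f(x) + self.f_h(child_h))     # one forget gate per child
        c = i * u + (f * child_c).sum(0)
        h = o * torch.tanh(c)
        return h, c
```

Applying this cell bottom-up over a dependency parse yields the sentence representation that the number-prediction classifier consumes.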
[ "abstain", "abstain", "method", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other" ]
[ "Finetuning deep pre-trained language models has shown state-of-the-art performances on a wide range of Natural Language Processing (NLP) applications.", "Nevertheless, their generalization performance drops under domain shift.", "In the case of Arabic language, diglossia makes building and annotating corpora for each dialect and/or domain a more challenging task.", "Unsupervised Domain Adaptation tackles this issue by transferring the learned knowledge from labeled source domain data to unlabeled target domain data.", "In this paper, we propose a new unsupervised domain adaptation method for Arabic cross-domain and cross-dialect sentiment analysis from Contextualized Word Embedding.", "Several experiments are performed adopting the coarse-grained and the fine-grained taxonomies of Arabic dialects.", "The obtained results show that our method yields very promising results and outperforms several domain adaptation methods for most of the evaluated datasets.", "On average, our method increases the performance by an improvement rate of 20.8% over the zero-shot transfer learning from BERT.", "The Arabic language is characterized by two main language varieties: Modern Standard Arabic (MSA) and Arabic dialect.", "MSA has a standard written form and acquires an official status across the Arab countries, while Dialectal Arabic refers to the informal spoken dialects in the Arab World (Habash, 2010).", "These dialects are used in daily life but have no standard written form (Saadane and Habash, 2015; Habash et al., 2018; Eryani et al., 2020).", "Geographically and according to (Zaidan and Callison-Burch, 2014), Arabic dialects can be classified into five coarse-grained regional dialects: Egyptian, Levantine, Gulf, Iraqi, and Maghrebi.", "Recent studies have categorized dialectal Arabic into more fine-grained levels, including countries and cities (Bouamor et al., 2019; Muhammad et al., 2020).", "These dialects differ from one another and from MSA, to a varying degree, at different linguistic levels (Salameh et al., 2018; Erdmann et al., 2018).", "With the unprecedented reach of social media platforms, Sentiment Analysis (SA) has become a fundamental task for many applications.", "Most research work in this area has been devoted to English and other European languages, while some research studies have addressed the question of transfer learning from MSA to dialectal Arabic.", "However, Khaddaj et al. (2019) and Qwaider et al. 
(2019) have shown that zero-shot transfer learning, from models trained on MSA data, does not perform well for SA on dialectal Arabic data.", "So, existing works have focused on building resources and annotating corpora for a few dialects where most of them were collected from social media (Med-haffar et al., 2017; Al-Twairesh et al., 2017; Baly et al., 2018; Moudjari et al., 2020; Oueslati et al., 2020).", "Nevertheless, dealing with Arabic dialects as standalone languages is challenging since building manually such resources is costly and time-consuming.", "It is well known that the generalization performance of Machine Learning (ML) models drops in the case of domain shift (out of distribution data).", "Hence, there is an imperative need to leverage existing labeled data from other related domains, in order to address this challenge.", "The aim is to accurately transfer the learned knowledge from a source domain labeled data to a new target domain data.", "On the one hand, adaptive pretraining of contextualized word embedding models has shown an effective transfer learning performance under domain shift (Han and Eisenstein, 2019; Rietzler et al., 2020).", "It consists of finetuning the pre-trained language models on large unlabeled corpus from the target domain using the MLM objective.", "On the other hand, self-training and domain-adversarial learning have been applied successfully to many NLP applications (Li et al., 2020; Ramponi and Plank, 2020; Ye et al., 2020; Ganin et al., 2016).", "An effective method that combines domain-adversarial training and self-training is the Adversarial-Learned Loss for Domain Adaptation (ALDA) (Chen et al., 2020).", "The domain-adversarial training aligns both domains' output distributions, while self-training captures the discriminative features of the target domain data.", "In this paper, we introduce a new unsupervised domain adaptation method for Arabic cross-domain and cross-dialect sentiment analysis based on AraBERT language model (Antoun et al., 2020) and the Adversarial-Learned Loss for Domain Adaptation (ALDA) (Chen et al., 2020).", "Due to limited amount of unlabeled data for most target domains-dialects, we do not rely on the adaptive pre-training of AraBERT model.", "Our method leverages the potentials of:", "i) contextualized word embeddings to learn high-level text representation,", "ii) adversarial domain training to match the output distributions of domains and dialects, and", "iii) self-training to capture the discriminative features of the target domain data.", "The proposition of a new unsupervised domain adaptation method for Arabic SA.", "The study of three possible challenging scenarios of domain adaptation for Arabic SA.", "The achievement of very promising results on several Arabic cross-domain and cross-dialect sentiment classification datasets.", "To the best of our knowledge, this is the first study that investigates domain adaptation for cross-domain, cross-dialect and cross-domain & cross-dialect sentiment analysis, adopting the coarse-grained and the fine-grained taxonomies of Arabic dialects.", "The proposed method outperforms several state-of-the-art methods on most test datasets.", "The rest of this paper is organized as follows.", "Section 2 presents related work.", "In Section 3, we introduce our method.", "Section 4 illustrates the conducted experiments, and discusses the obtained results.", "Finally, in Section 6, we conclude the paper and outline a few directions for future work.", "sentiment analysis (Badaro et 
al., 2019; Al-Ayyoub et al., 2019).", "This has been achieved by publishing datasets (Elnagar et al., 2018; Ashraf and Omar, 2016; Aly and Atiya, 2013; ElSahar and El-Beltagy, 2015; Nabil et al., 2015), sentiment lexicons (Badaro et al., 2014; El-Beltagy, 2016; Gilbert Badaro and Habash, 2018), and proposing models as well as architectures that reach decent accuracy scores (Al Sallab et al., 2015; Antoun et al., 2020; Abdul-Mageed et al., 2020).", "As an example, the pre-trained language model AraBERT (An-toun et al., 2020) has achieved state-of-the-art performance on Arabic sentiment classification tasks.", "Nevertheless, most of these achievements are still limited to the MSA, and to some Arabic dialects and domains (Badaro et al., 2019; Al-Ayyoub et al., 2019).", "Unsupervised domain adaptation.", "In the past few years, there has been considerable interest in unsupervised domain adaptation for cross-domain NLP tasks, including cross-domain sentiment analysis (Ramponi and Plank, 2020).", "Previous work has focused on minimizing the discrepancy between domains by aligning the output distributions of the source and the target domains.", "Maximum Mean Discrepancy (MMD) (Gretton et al., 2012), KL-divergence (Zhuang et al., 2015), Correlation Alignment (CORAL) (Sun and Saenko, 2016), and domain-adversarial learning (Ganin et al., 2016) are among the most widely used methods to learn domain-invariant features.", "In the same vein, other researchers have adopted self-training approach in order to learn discriminative features of the target domain (Ramponi and Plank, 2020; Ye et al., 2020).", "The latter approach enables the model to be also trained on some samples of the target domain.", "The main idea is to select a subset of pseudo-labels, predicted on the target domain inputs, for which the model's confidence is higher than a fixed threshold, and to incorporate them into the model loss.", "However, pseudo-labels are generally noisy and may hurt the performance of the model.", "Chen et al. (2020) have tackled this issue by introducing the adversarial-learned loss for domain adaptation where the discriminator corrects the noise in the pseudo-labels by generating noise vectors that are specific for each domain.", "Domain adaptation for cross-domain sentiment analysis.", "In order to learn cross-domain text representation, several domain adaptation methods have relied on pivot features extraction.", "Inspired from structural correspondence learning, Yu and Jiang (2016) have proposed a method to learn continuous sentence embedding employing CNN model across various domains.", "Li et al. (2018) have introduced a domain adaption method which can be extended to documents.", "The latter method uses a hierarchical attention transfer network for extracting the pivots and non-pivots features between source and target domains.", "Ziser and Reichart (2018) have proposed language modeling objective to learn a model scratch rather than adapting a pre-trained embedding model.", "Recently, several methods have been introduced for domain adaptation based on adaptive pretraining of contextualized word embeddings (Han and Eisenstein, 2019; Li et al., 2020; Vu et al., 2020).", "The latter approach relies on the availability of a large amount of unlabeled data in the target domain to finetune/adapt the existing pre-trained language model using the MLM objective.", "Rietzler et al. 
(2020) have proposed an unsupervised domain adaptation method for aspect-target sentiment classification based on BERT adaptive pretraining.", "Vu et al. (2020) have presented an adaptive pre-training method that adversarially masks out tokens that are hard to be reconstructed by the MLM.", "In another work, (Du et al., 2020) have proposed to combine BERT domain-aware training and adversarial-domain learning (Ganin et al., 2016) for cross-domain sentiment analysis.", "The domain-aware training combines the adaptive pretraining using the MLM objective and a Domain Distinguish Task (DDT).", "For cross-domain and cross-lingual domain adaptation, Li et al. (2020) have introduced an unsupervised feature decomposition method based on the mutual information to extract domain-invariant and domain-specific features using the XLM language model (Lample and Conneau, 2019).", "For the Arabic language, Khaddaj et al. (2019) have introduced a domain adaptation method for cross-domain and cross-dialect sentiment analysis, combining domain adversarial training (Ganin et al., 2016) with denoising autoencoder for representation learning.", "The input sentences of both domains are represented using the bag-of-words representation by selecting the top 5,000 most frequent unigrams and bigrams.", "The obtained results on the Levantine multi-topic ArSentD-LEV dataset (Baly et al., 2018) show that combining the reconstruc-tion loss with the adversarial training has slightly improved the performance in some cases.", "Nevertheless, the overall obtained results show that the zero-shot transfer from the SVM model achieves competitive results for some datasets.", "In another work, Qwaider et al. (2019) have shown that models that are trained on MSA for the task of sentiment classification generalize poorly to dialectal Arabic data.", "For improving the results, they have performed domain adaptation using feature engineering and sentiment lexicons.", "In this section, we present our model architecture.", "The noise-correcting discriminator, the classifier and the generator losses, employed in our model, are those of ALDA model (Chen et al., 2020).", "In unsupervised domain adaptation settings, for sentiment analysis, we are given a labeled source domain DS = { ( x is , y is ) } n s i =1 having K classes and an unlabeled target domain DT = { x it } n t i =1 .", "The aim is then to transfer the learned knowledge from DS to DT .", "In other words, the objective is to train a robust classifier using the labeled source domain data that generalizes well on the target domain test data.", "Figure 1 presents the general framework of our method.", "We aim to leverage the strength of both the domain adversarial training and the self-training in a unified framework.", "The adversarial training aligns both domains' output distributions, whereas the self-training considers the discriminative features of the target domain.", "Besides, AraBERT is used as a generator to extract high-level representation from both source and target domains sentences.", "The generator G , the AraBERT encoder, is trained to extract features from the input sentences for both domains: h [ CLS ] = G ( x ) corresponds to the hidden state of the [ CLS ] token.", "The weights of the generator are shared between both domain inputs.", "The classifier C operates on the hidden states h [ CLS ] to classify the input instances x and outputs a probability vector p ( y = k | x ) = Softmax ( W c h [ CLS ] + b c ) for both domains ( p s and p t ), where b c and W c are 
the bias vector and the weight matrix of the classification layer, respectively.", "[Figure 1: The general framework of our method.]", "Following the domain-adversarial training approach, the generator G is trained to fool the discriminator D by maximizing its loss.", "Thus, the generator aligns both domains' output distributions, whereas the discriminator must distinguish both domains' features by generating different noise vectors for each domain.", "These noise vectors are employed to correct the pseudo-labels predicted by the classifier C.", "The Gradient Reversal Layer (GRL) reverses the gradient of the discriminator's loss during the back-propagation step.", "The input of the discriminator D corresponds to the hidden state h_[CLS] of the generator G.", "D is trained to produce a noise vector ξ(x) = σ(D(h_[CLS])) by applying the sigmoid activation σ to its output layer.", "Note that the output layer size is equal to K, the number of classes.", "Each component of the noise vector estimates the probability that the predicted label is the correct label: ξ(x)_k = p(y = k | ŷ = k, x).", "Hence, instead of being trained to classify the source domain sentences and those of the target domain, D is trained to generate different noise vectors for each domain.", "The noise vector is used to estimate the confusion matrix μ = (μ_kl), which is applied to correct the target domain's pseudo-labels predicted by the classifier C.", "The intuition behind the ALDA model is that, if we appropriately estimate the confusion matrix, the noise in the pseudo-labels predicted by the classifier can be efficiently corrected (Chen et al., 2020).", "Assuming that the noise in the pseudo-labels is class-wise uniform with vector ξ(x_t), the confusion matrix is given by: μ(x_t)_kl = ξ(x_t)_k if k = l, and (1 - ξ(x_t)_l) / (K - 1) otherwise.", "The corrected label vector in the target domain is given by c(x_t) = Σ_l μ(x_t)_kl p(ŷ_t = l | x_t) (l is the predicted pseudo-label).", "For the source domain, the corrected label vector c(x_s) is computed using the same procedure.", "For the source domain, the discriminator minimizes the binary cross-entropy loss L_bce between the corrected label vectors and the ground-truth labels y_s: L_adv(x_s, y_s) = L_bce(c(x_s), y_s) (1)", "For the target domain, the discriminator minimizes the binary cross-entropy loss L_bce between the corrected label vector and the opposite distribution u(ŷ_t) of the predicted pseudo-label: L_adv(x_t) = L_bce(c(x_t), u(ŷ_t)) (2), where u(ŷ_t)_k = 0 if ŷ_t = k, and 1 / (K - 1) otherwise.", "To discriminate between both domains, the discriminator minimizes the following total adversarial loss: L_adv(x_s, y_s, x_t) = L_adv(x_s, y_s) + L_adv(x_t) (3)", "In order to make the training more stable, ALDA incorporates the classification loss of the source domain as a regularization term into the discriminator.", "Thus, the discriminator must also correctly classify the source domain data.", "The regularization term is given by: L_reg(x_s, y_s) = L_ce(p(x_s)_d, y_s) (4), where p(x_s)_d = softmax(D(h(x_s)_[CLS])) and L_ce is the cross-entropy loss.", "Finally, the discriminator minimizes the following loss function: L_D = L_adv(x_s, y_s, x_t) + L_reg(x_s, y_s) (5)", "3.3 Classifier and generator losses: Following the principles of pseudo-labeling methods for domain adaptation, the ground-truth label y_t for the target domain can be substituted by the pseudo-label ŷ_t = argmax_k p_t^k if p_t^k > τ, where τ is a confidence threshold.", "By using the learned confusion matrix μ(x_t) to
"In this section, we present the experiments carried out to investigate the performance of our proposed method for Arabic cross-domain and cross-dialect sentiment analysis.", "We describe the datasets used and present the compared methods as well as the obtained results.", "We provide the experimental settings and implementation details of our method in Section A.", "The source code for reproducing the experiments can be found in our github repository 1.", "1 https://github.com/4mekki4/arabic-nlp-da", "Scenario 1: Domain adaptation for dialects of the same region.", "The set of experiments of this scenario aims to study our method's performance for cross-dialect and cross-domain sentiment analysis on Arabic dialects of the same region.", "To do so, we employ the existing multi-domain, multi-dialect ArSentD-LEV dataset of the Levant region (Baly et al., 2018).", "ArSentD-LEV contains 1,000 tweets from each country of the Levant region (4,000 in total): Syria, Palestine, Jordan, and Lebanon.", "It is labeled with five classes and covers tweets from five topics: Personal (36%), Politics (23%), Religion (11%), Sport (6%), and Other (24%).", "Scenario 2: Domain adaptation across regional dialects.", "In the set of experiments of this scenario, we investigate the performance of our method using the coarse-grained regional taxonomy of Arabic dialects.", "For this purpose, we proceed in three steps:", "1. Firstly, we select three datasets mixing Arabic dialects and MSA: BRAD (Elnagar and Einea), HARD (Elnagar et al., 2018), and TEAD (Abdellaoui and Zrigui, 2018), which are compiled from book reviews, hotel reviews, and Twitter, respectively.", "These datasets have sufficient samples to build a multi-dialect, multi-domain dataset.", "2. Secondly, we train an AraBERT-based dialect identification model, selecting data from some of the publicly available datasets, including MADAR (Bouamor et al., 2019), DART (Alsarsour et al., 2018), AOC (Zaidan and Callison-Burch, 2011), PADIC (Karima et al., 2018), and the multi-dialect Arabic text corpora proposed in (Khalid and Mark, 2013).", "The resulting Arabic dialect identification corpus consists of 353,171 training sentences and a balanced test set of 50,000 sentences, and covers MSA as well as dialectal sentences from the Maghrebi, Levantine, Egyptian, and Gulf regions.", "It is worth mentioning that our trained dialect identification model achieves 89% accuracy.", "3. Finally, we apply our dialect identification model to the three evaluated datasets to build our multi-dialect, multi-domain corpus (a sketch of this tagging step follows the list)."
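Step 3 can be sketched as follows, assuming the fine-tuned dialect identifier is exposed as a HuggingFace text-classification pipeline; the checkpoint path and label names are hypothetical:

```python
from collections import defaultdict
from transformers import pipeline

# Hypothetical checkpoint; the paper trains its own AraBERT-based identifier.
dialect_id = pipeline("text-classification", model="path/to/arabert-dialect-id")

def bucket_by_dialect(sentences):
    """Tag every sentence and group the corpus by predicted dialect."""
    buckets = defaultdict(list)
    for sent in sentences:
        label = dialect_id(sent, truncation=True)[0]["label"]  # e.g. MSA, Levantine, Gulf
        buckets[label].append(sent)
    return buckets
```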
"Moreover, we select the Levantine and Gulf dialects and MSA, which yielded sufficient data across domains.", "For the review datasets, rating levels 1 and 2 are assigned negative polarity, while ratings 4 and 5 are considered positive.", "Furthermore, we sample 1,000 positive and 1,000 negative instances for each dialect to build our final multi-dialect, multi-domain dataset, as sketched below."
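The rating-to-polarity mapping and the balanced sampling take only a few lines; a sketch with pandas, assuming hypothetical column names rating, dialect and text, and assuming that reviews with rating 3 are discarded (the text above only maps ratings 1-2 and 4-5):

```python
import pandas as pd

def build_balanced_corpus(df, n_per_class=1000, seed=0):
    """Map review ratings to polarity and sample a balanced set per dialect."""
    df = df[df["rating"] != 3].copy()                 # neutral rating assumed dropped
    df["polarity"] = (df["rating"] >= 4).map({True: "positive", False: "negative"})
    samples = [
        grp.sample(n=min(n_per_class, len(grp)), random_state=seed)
        for _, grp in df.groupby(["dialect", "polarity"])
    ]
    return pd.concat(samples, ignore_index=True)
```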
"Scenario 3: Domain adaptation from MSA to Arabic dialects using social media data.", "The set of experiments of this scenario tackles the transfer of learning from MSA to Arabic dialects belonging to different regions, using corpora built from social media (see Table 7 of Section 1.5 for dataset details).", "Thus:", "1. For MSA, we use the ArSAS dataset (Elmadany et al., 2018).", "2. For the Maghrebi region, we employ the MSAC (Morocco) (Oussous et al., 2020) and TSAC (Tunisia) (Medhaffar et al., 2017) datasets.", "Note that we have removed sentences written in Arabizi from TSAC.", "3. For the Egyptian region, we use the ASTD dataset (Nabil et al., 2015).", "4. For the Levant region, we utilize the AJGT (Jordan) (Alomari et al., 2017) and TweetSYR (Syria) (Salameh et al., 2015) datasets.", "5. For the Gulf region, we employ the Saudi dialect AraSenti-Tweet dataset (Al-Twairesh et al., 2017).", "Since some of these datasets are labeled with positive and negative classes only (TSAC and MSAC), we evaluate our method using positive and negative sentences for all the datasets used.", "We use the train-test splits of the evaluated datasets whenever this information is available.", "Otherwise, we split the datasets into 80% train and 20% test.", "For ArSentD-LEV, following the work of Khaddaj et al. (2019), we evaluate our method on the full target domain/dialect dataset.", "For all our experiments, we use accuracy as the evaluation measure and highlight the best performance in bold.", "In order to assess the performance of our method, we compare it with the state-of-the-art domain adaptation method introduced by Khaddaj et al. (2019) for Arabic sentiment analysis on the ArSentD-LEV dataset.", "Moreover, we evaluate BERT for zero-shot transfer from the source domain, denoted ZS-BERT.", "For a fair comparison, we investigate the performance of three state-of-the-art domain adaptation methods, including MMD (Gretton et al., 2012), CORAL (Sun and Saenko, 2016), and DANN (Ganin et al., 2016).", "We implement the latter methods on top of AraBERT.", "We have also evaluated two state-of-the-art cross-domain sentiment analysis methods, namely PBLM (Ziser and Reichart, 2018) and HATN (Li et al., 2018).", "It is worth mentioning that for PBLM and HATN, we have used an extra 4,000 unlabeled sentences from each domain/dialect.", "For HATN, we have used the Mazajak word embedding model (Abu Farha and Magdy, 2019).", "4.3 Results", "Scenario 1: Domain adaptation for dialects of the same region.", "Tables 1 and 2 present the results obtained for Arabic cross-domain and cross-dialect sentiment analysis using ArSentD-LEV.", "The results show that ZS-BERT, the zero-shot transfer-based method, outperforms both DANN_BOW and ADRL, the state-of-the-art domain adaptation methods that are based on the bag-of-words representation.", "Moreover, training the state-of-the-art domain adaptation methods, including CORAL, MMD and DANN, on top of the BERT module has improved BERT's transfer performance across dialects.", "Besides, these three methods achieve comparable performance for most source and target dialects and outperform both DANN_BOW and ADRL.", "Furthermore, our method, which is based on BERT and ALDA's losses, surpasses the existing state-of-the-art methods and ZS-BERT with average improvements of 19% and 5.5%, respectively.", "Additionally, it shows better performance than the other domain adaptation methods implemented on top of BERT (CORAL, MMD, and DANN).", "In accordance with the results obtained for cross-dialect transfer, Table 2 shows that the ZS-BERT method outperforms both DANN_BOW and ADRL in most test cases of cross-domain sentiment analysis (14 out of 20 cases).", "Besides, the results show that the three domain adaptation methods CORAL, MMD, and DANN outperform both DANN_BOW and ADRL, and improve the transfer performance of the BERT model.", "On average, the latter three methods (CORAL, MMD, and DANN) are on a par with each other in terms of accuracy.", "Similarly, our proposed method outperforms both state-of-the-art methods (DANN_BOW and ADRL) as well as ZS-BERT, by average increments of 19% and 10.7%, respectively.", "Moreover, it achieves better performance than CORAL, MMD, and DANN for most source and target domains."

Source      Target     DANN_BOW  ADRL  ZS-BERT  CORAL  MMD   DANN  Ours
Politics    Personal   28.7      33.3  28.7     41.6   41.3  43    44.3
Politics    Religious  20.3      25.3  10       33.6   33.3  34.2  46.3
Politics    Sport      35.1      35.1  36.7     46.6   32.8  46.8  46.8
Politics    Other      22.5      24.2  38.2     49.7   50    39.7  46.1
Personal    Politics   41.7      36.8  46.3     49.7   49.4  47.5  49.7
Personal    Religious  22.8      23.4  41       44.3   44.7  43.5  44.2
Personal    Sport      26.8      25.8  43.5     49.7   49.5  48.2  46.6
Personal    Other      33.8      35.4  53       57.4   57.7  49.6  58
Religious   Politics   15.5      15.5  12       42     42    37.6  40.8
Religious   Personal   24.1      26.1  25       35.1   37    36.8  38
Religious   Sport      25.8      26.8  21.6     38.1   32.8  28.5  34.8
Religious   Other      30.6      27.4  26.4     46.4   43.2  43.2  48.4
Sport       Politics   36.4      30.7  46.9     48.7   48.3  43.1  44.6
Sport       Personal   25.3      24.5  40.7     43.8   42.3  43.6  44.5
Sport       Religious  20        19    30.8     29.2   31    40.2  44
Sport       Other      35.5      35.5  48.3     49     49.6  49    54.2
Other       Politics   23.2      23.2  46.8     46.5   46.4  34.4  46.8
Other       Personal   30.3      24.9  40.2     46.2   44.3  40.3  45.5
Other       Religious  41.8      43    39.5     45.8   47.6  48.6  48.9
Other       Sport      23.7      27.8  46.7     48.4   51.1  47.7  50.9
Table 2: Accuracy of Arabic cross-domain sentiment analysis using the ArSentD-LEV dataset (DANN_BOW and ADRL are the prior state of the art; the remaining columns are our runs).

"Scenario 2: Domain adaptation across regional dialects.", "Table 3 summarizes the results obtained for cross-dialect, cross-domain, and combined cross-dialect & cross-domain Arabic sentiment analysis using two regional dialects (Gulf and Levantine) and MSA data, covering three domains (book reviews, hotel reviews and Twitter).", "The overall results show that zero-shot transfer from AraBERT (ZS-BERT) outperforms the previous state-of-the-art methods (PBLM and HATN).", "Moreover, the domain adaptation methods evaluated on top of BERT improve AraBERT's performance for all evaluated scenarios.", "Besides, the results demonstrate that the performance of the ZS-BERT method drops significantly in the cross-domain as well as the cross-dialect & cross-domain scenarios.", "Nevertheless, the domain adaptation methods show larger improvements (an increment of 7.4% on average) in the scenarios mentioned above.", "The obtained results clearly show that our method surpasses the other methods for most target datasets and scenarios; in the remaining cases the gap is small.", "Scenario 3: Domain adaptation from MSA to Arabic dialects using social media data.", "Table 4 presents the domain adaptation results obtained from MSA to Arabic dialects."
Target region ->            Gulf                        Levantine                   Modern Standard Arabic
Scenario      Method        BRAD  HARD  TEAD  Avg       BRAD  HARD  TEAD  Avg       BRAD  HARD  TEAD  Avg
Cross-dialect PBLM          53.5  83.38 66.37 67.75     57.25 78.12 62.62 66.0      50.87 80.0  64.75 65.21
              HATN          58.05 75.5  62.2  65.25     58.75 74.6  61.72 65.02     58.22 77.35 57.98 64.52
              ZS-BERT       72.7  92.1  72.5  79.1      80.1  95.3  73.8  83.1      75.3  95.3  73.0  81.2
              CORAL         77.0  94.1  73.3  81.5      81.1  95.4  73.6  83.4      80.3  96.6  75.5  84.1
              MMD           77.1  94.4  73.1  81.5      81.9  96.3  74.1  84.1      80.1  97.1  76.8  84.7
              DANN          77.6  94.3  72.9  81.6      81.4  95.8  75.6  84.3      79.1  96.4  74.5  83.3
              Ours          78.4  94.3  73.0  81.9      82.3  96.5  76.6  85.1      79.6  96.4  74.9  83.6
Cross-domain  PBLM          51.25 51.63 48.88 50.58     59.62 50.25 48.25 52.71     64.0  51.75 50.0  55.25
              HATN          57.35 54.45 52.6  54.8      52.77 49.32 48.88 50.32     60.7  52.97 54.68 56.12
              ZS-BERT       55.7  70.9  58.8  61.8      60.6  69.9  58.0  62.8      64.5  76.8  61.8  67.7
              CORAL         62.9  82.3  60.6  68.6      66.1  74.1  59.9  66.7      66.3  78.3  66.9  70.5
              MMD           62.3  73.0  61.5  65.6      64.4  75.8  59.4  66.5      67.4  82.6  66.1  72.0
              DANN          62.9  80.1  59.8  67.6      62.4  77.1  62.5  67.3      66.6  78.6  66.4  70.5
              Ours          65.3  85.1  62.5  71.0      64.5  79.1  62.3  68.6      69.8  93.3  66.8  76.6
Cross-dialect PBLM          51.56 50.62 49.81 50.67     50.0  52.44 48.81 50.42     53.19 49.44 49.56 50.73
& cross-      HATN          55.24 50.91 52.29 52.81     56.78 50.24 50.2  52.4      55.35 52.36 53.06 53.59
domain        ZS-BERT       57.8  72.9  59.9  63.5      63.0  74.5  59.1  65.5      60.2  71.6  60.8  64.2
              CORAL         63.8  82.8  60.8  69.1      64.8  78.3  60.8  67.9      66.8  79.4  63.9  70.0
              MMD           63.6  82.0  60.5  68.7      64.4  79.4  60.9  68.3      66.4  77.9  64.1  69.5
              DANN          63.2  75.8  62.3  67.1      63.7  77.2  62.1  67.7      65.2  80.2  65.2  70.2
              Ours          64.9  85.6  61.7  70.8      64.6  86.5  61.2  70.8      66.8  79.4  65.8  70.6
Table 3: Accuracy of cross-dialect, cross-domain, and cross-dialect & cross-domain Arabic sentiment analysis using two regional dialects and MSA data, covering three domains (books, hotels, and Twitter).

"In agreement with the previously obtained results, all domain adaptation methods outperform the ZS-BERT method for all evaluated datasets, by an average increment of 4.9%.", "CORAL, MMD, and DANN achieve comparable performance on most dialectal datasets.", "Moreover, the overall comparison shows that our method outperforms all other domain adaptation methods.", "The overall results of the evaluated scenarios show that our method improves the transfer performance of contextualized word embeddings.", "Moreover, it achieves far better transfer performance than the state-of-the-art methods that are based on the bag-of-words representation or pretrained word embeddings.", "Indeed, all BERT-based domain adaptation methods yield far better transfer learning performance than both the DANN_BOW and ADRL methods.", "Besides, our method achieves better performance than CORAL, MMD, and DANN, which are implemented on top of the BERT module.", "These results can be explained by the fact that BERT captures a high-level representation of the input text (Devlin et al., 2019; Antoun et al., 2020), as well as by the effectiveness of ALDA.", "In fact, the latter aligns both domains' output distributions using adversarial training and captures the discriminative features of the target domain inputs through self-training (Chen et al., 2020).", "Moreover, using BERT as a feature generator allows the model to extract high-level shared features of the input data that are transferable across domains and dialects.", "For instance, the results show that training DANN on top of the BERT model outperforms DANN_BOW, trained using the bag-of-words text representation, and even the state-of-the-art methods based on pivot feature extraction (HATN and PBLM), by a large margin for both cross-domain and cross-dialect sentiment analysis (Tables 1 and 2)."
"To understand why our proposed method outperforms the previous methods, we perform an error analysis.", "In this error analysis we focus on two aspects: the instances misclassified by our system, and the instances correctly predicted by our method on which the other approaches fail.", "For the first aspect, the majority of misclassified samples correspond to very short sentences in the target dialect.", "Most of them are idiomatic, offensive or sarcastic expressions that are specific to the target dialect and contain words that are distant from MSA: /wAErp/, /gAr Allh yEfw wSAfy/, /mlA THAn/ and /crf xArf/.", "It is worth mentioning that the other evaluated methods also misclassify these samples.", "For the second aspect, we have checked the cases where our method correctly predicts the instance labels while the other methods fail.", "Overall, we notice that the zero-shot predictions were biased towards the distribution of the source data; for example, the ArSAS dataset contains 63% negative instances.", "MMD, CORAL and DANN overcome this issue by aligning the distributions of source and target features, which improves the results on the target domain.", "Meanwhile, they tend to misclassify reviews that convey multiple sentiment polarities, as is the case for hotel or book reviews, where users tend to express their negative and positive sentiments in the same review.", "Table 8 (Section B) shows a sample of these instances.", "Our method outperforms these DA methods since it relies on a noise-correcting discriminator that generates different noise vectors for the source and the target domain and learns a confusion matrix in an adversarial manner.", "By correcting the noise in the pseudo-labels of the target domain using the confusion matrix, we can achieve a class-wise feature alignment of the source and the target domains.", "In contrast, the other evaluated DA methods align the output features of the source and the target domain in a class-agnostic fashion.", "In this work, we have introduced an unsupervised domain adaptation method for Arabic cross-domain and cross-dialect sentiment analysis based on the pretrained AraBERT language model and the Adversarial-Learned Loss for Domain Adaptation (ALDA).", "We have performed several experiments to investigate the performance of our method as well as several state-of-the-art methods, adopting both the coarse-grained and the fine-grained taxonomies of Arabic dialects.", "Moreover, we have studied the performance of domain adaptation from MSA to Arabic dialects using social media data.", "The overall results showed that domain adaptation methods outperform zero-shot transfer from the BERT model by a large margin.", "Furthermore, our method achieved very promising performance and surpassed the evaluated methods on most test datasets.", "In future work, we plan to investigate domain-adaptive pre-training by collecting unlabeled data for target domains and fine-tuning AraBERT using the MLM objective.", "The aim is to study the performance of our method using a domain-aware language model.", "Since the zero-shot transfer performance of the BERT model drops significantly in the cross-domain sentiment analysis experiments, we believe that training domain adaptation methods on top of a domain-aware BERT model will lead to improved performance."
"We also plan to study domain adaptation from high-resource languages, such as English, to the Arabic language and its dialects." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "other", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "result", "objective", "objective", "result", "method" ]
[ "Code-switching is the use of more than one language in the same conversation or utterance.", "Recently, multilingual contextual embedding models, trained on multiple monolingual corpora, have shown promising results on cross-lingual and multilingual tasks.", "We present an evaluation benchmark, GLUECoS, for code-switched languages, that spans several NLP tasks in English-Hindi and English-Spanish.", "Specifically, our evaluation benchmark includes Language Identification from text, POS tagging, Named Entity Recognition, Sentiment Analysis, Question Answering and a new task for code-switching, Natural Language Inference.", "We present results on all these tasks using cross-lingual word embedding models and multilingual models.", "In addition, we fine-tune multilingual models on artificially generated code-switched data.", "Although multilingual models perform significantly better than cross-lingual models, our results show that in most tasks, across both language pairs, multilingual models fine-tuned on code-switched data perform best, showing that multilingual models can be further optimized for code-switching tasks.", "Code-switching, or code-mixing, is the use of more than one language in the same utterance or conversation and is prevalent in multilingual societies all over the world.", "It is a spoken phenomenon and is found most often in informal chat and social media on the Internet.", "Processing, understanding, and generating code-mixed text and speech has become an important area of research.", "Recently, contextual word embedding models trained on a large amount of text data have shown state-of-the-art results in a variety of NLP tasks.", "Models such as BERT (Devlin et al., 2018) and its multilingual version, mBERT, rely on large amounts of unlabeled monolingual text data to build monolingual and multilingual models that can be used for downstream tasks involving limited labelled data.", "(Wang et al., 2018) propose a Generalized Language Evaluation Benchmark (GLUE) to evaluate embedding models on a wide variety of language understanding tasks.", "This benchmark has spurred research in monolingual transfer learning settings.", "Data and annotated resources are scarce for code-switched languages, even if one or both languages being mixed are high resource.", "Due to this, there is a lack of standardized datasets in code-switched languages other than those used in shared tasks in a few language pairs.", "Although models using synthetic code-switched data and cross-lingual embedding techniques have been proposed for code-switching (Pratapa et al., 2018a), there has not been a comprehensive evaluation of embedding models across different types of tasks.", "Furthermore, there have been claims that multilingual models such as mBERT are competent in zero-shot cross lingual transfer and code-switched settings.", "Though comprehensively validated by (Pires et al., 2019) in the case of zero-shot transfer, the probing in code-switched settings was limited to one dataset of one task, namely POS Tagging.", "To address all these issues and inspired by the GLUE (Wang et al., 2018) benchmark, we propose GLUECoS, a language understanding evaluation framework for Co deS witched NLP.", "We include five tasks from previously conducted evaluations and shared tasks, and propose a sixth, Natural Language Inference task for code-switching, using a new dataset 1 (Khanuja et al., 2020).", "We include tasks varying in complexity ranging from word-level tasks [ Language Identification (LID); Named Entity 
Recognition (NER)], syntactic tasks [POS tagging], semantic tasks [Sentiment Analysis; Question Answering] and finally a Natural Language Inference task.", "Where available, we include multiple datasets for each task in English-Spanish and English-Hindi.", "We choose these language pairs not only due to the relative abundance of publicly available datasets, but also because they represent variations in the types of code-switching, language families, and scripts between the languages being mixed.", "We test various cross-lingual and multilingual models on all of these tasks.", "In addition, we also test models trained with synthetic code-switched data.", "Lastly, we fine-tune the best performing multilingual model with synthetic code-switched data and show that in most cases its performance exceeds that of the multilingual model, highlighting that multilingual models can be further optimized for code-switched settings.", "The main contributions of our work are as follows: We point out the lack of standardized datasets for code-switching and propose an evaluation benchmark, GLUECoS, which can be used to test models on various NLP tasks in English-Hindi and English-Spanish.", "In creating the benchmark, we highlight the tasks that are missing from code-switched NLP and propose a new task, Natural Language Inference, for code-switched data.", "We evaluate cross-lingual and pre-trained multilingual embeddings on all these tasks, and observe that pre-trained multilingual embeddings significantly outperform cross-lingual embeddings.", "This highlights the competence of generalized language models over cross-lingual word embeddings.", "We fine-tune pre-trained multilingual models on linguistically motivated synthetic code-switched data, and observe that they perform better in most cases, highlighting that these models can be further optimized for code-switched settings.", "The rest of the paper is organized as follows.", "We relate our work to prior work to situate our contributions.", "We introduce the tasks and datasets used for GLUECoS, motivating the choices we make.", "We describe the experimental setup, with details of the models used for baseline evaluations.", "We present the results of testing all the models on the benchmark and analyze the results.", "We conclude with a direction for future work and highlight our main findings.", "The idea of a generalized benchmark for code-switching is inspired by GLUE (Wang et al., 2018), which has spurred research in Natural Language Understanding in English to such an extent that a set of harder tasks has been curated in a follow-up benchmark, SuperGLUE (Wang et al., 2019), once models beat the human baseline for GLUE.", "The motivation behind GLUE is to evaluate models in a multi-task learning framework across several tasks, so that tasks with less training data can benefit from others.", "Although our current work does not include models evaluated in a multi-task setting, we plan to implement this in subsequent versions of the benchmark.", "There have been shared tasks conducted in the past as part of code-switching workshops co-located with notable NLP conferences.", "The first and second workshops on Computational Approaches to Code Switching (Diab et al., 2014, 2016) conducted a shared task on Language Identification for several language pairs (Solorio et al., 2014; Molina et al., 2016).", "The third workshop (Aguilar et al., 2018) included a shared task on Named
Entity Recognition for the English-Spanish and Modern Standard Arabic-Egyptian Arabic language pairs (Aguilar et al., 2019).", "The Forum for Information Retrieval Evaluation (FIRE) aims to meet new challenges in multilingual information access and has conducted several shared tasks on code-switching.", "These include tasks on transliterated search (Roy et al., 2013; Choudhury et al., 2014), code-mixed entity extraction (Rao and Devi, 2016) and mixed-script information retrieval (Sequiera et al., 2015; Banerjee et al., 2016).", "Other notable shared tasks include the Tool Contest on POS Tagging for Code-Mixed Indian Social Media at ICON 2016 (Jamatia et al., 2016), Sentiment Analysis for Indian Languages (Code-Mixed) at ICON 2017 (Patra et al., 2018) and the Code-Mixed Question Answering Challenge (Chandu et al., 2018a).", "Each of the shared tasks mentioned above attracted several participants and has led to follow-up research on these problems.", "However, all of these tasks have focused on a single NLP problem, and so far there has not been an evaluation of models across several code-switched NLP tasks.", "Our objective with proposing GLUECoS is to address this gap and determine which models best generalize across different tasks, languages and datasets.", "Some NLP tasks are inherently more complex than others: for example, a Question Answering task that needs to understand the meaning of both the question and the answer is harder for a machine to solve than a word-level Language Identification task, in which a dictionary lookup can give reasonable results.", "Some datasets and domains may contain very little code-switching, while others may contain more frequent and complex code-switching.", "Similar languages, when code-switched, may maintain the word order of both languages, while language pairs that are very different may take on the word order of one of the languages.", "With these in mind, our choices of tasks and datasets for GLUECoS are based on the following principles: We choose a variety of tasks, ranging from simpler ones, on which the research community has already achieved high accuracies, to relatively more complex ones, on which very few attempts have been made.", "We desire to evaluate models on language pairs from different language families, and on a varied number of tasks, to enable detailed analysis and comparison.", "This led us to choose English-Hindi and English-Spanish, as we found previously studied datasets for almost all tasks in our benchmark for these language pairs.", "English and Spanish are written in the Roman script, while English-Hindi datasets can contain Hindi words written either in the original Devanagari script or in the Roman script, thus adding script variance as an additional parameter to analyse.", "We include multiple datasets for each language pair where available, so that results can be compared across datasets for the same task.", "Due to the lack of standardized datasets, we choose to create our own train-test-validation splits for some tasks.", "Also, we use an off-the-shelf transliterator and language detector where necessary, details of which can be found in Appendix A.
"Table 1 shows all the datasets that we use, with their statistics, while Table 2 shows the code-switching statistics of the data in terms of standardized metrics for code-switching (Gambäck and Das, 2014; Guzmán et al., 2017).", "Briefly, the code-mixing metrics include: Code-Mixing Index (CMI): the fraction of language-dependent tokens not belonging to the matrix language in the utterance.", "Average switch-points (SP Avg): the average number of intra-sentential language switch points in the corpus.", "Multilingual Index (M-index): a word-count-based measure quantifying the inequality of the distribution of language tags in a corpus of at least two languages.", "Probability of Switching (I-index): the proportion of the number of switch points in the corpus, relative to the number of language-dependent tokens.", "Burstiness: a quantification of whether switching occurs in bursts (randomly, similar to a Poisson process) or has a more periodic character.", "Language Entropy (LE): the bits of information needed to describe the distribution of language tags.", "Span Entropy (SE): the bits of information needed to describe the distribution of language spans.", "In cases where the datasets have been part of shared tasks, we report the highest scores obtained in each task as the State Of The Art (SOTA) for the dataset.", "Note, however, that we report this only to situate our results; the numbers cannot be directly compared, since each task's SOTA is obtained with a different training architecture, tuned to perform well on that particular task alone."
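Several of these metrics follow directly from word-level language tags; a minimal sketch for CMI, the I-index and the M-index, assuming each sentence is given as a list of tags such as EN, HI or OTHER (the tag names are illustrative):

```python
from collections import Counter

def cmi(tags, lang_tags={"EN", "HI"}):
    """Code-Mixing Index: fraction of language-dependent tokens outside the matrix language."""
    langs = [t for t in tags if t in lang_tags]
    if not langs:
        return 0.0
    matrix = Counter(langs).most_common(1)[0][1]   # tokens of the dominant language
    return (len(langs) - matrix) / len(langs)

def i_index(tags, lang_tags={"EN", "HI"}):
    """Probability of switching: switch points per pair of adjacent language-dependent tokens."""
    langs = [t for t in tags if t in lang_tags]
    switches = sum(a != b for a, b in zip(langs, langs[1:]))
    return switches / max(len(langs) - 1, 1)

def m_index(corpus_tags, lang_tags={"EN", "HI"}):
    """Multilingual index: inequality of the language-tag distribution over the corpus."""
    counts = Counter(t for tags in corpus_tags for t in tags if t in lang_tags)
    total = sum(counts.values())
    p2 = sum((c / total) ** 2 for c in counts.values())
    k = len(counts)
    return (1 - p2) / ((k - 1) * p2) if k > 1 else 0.0
```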
"Language Identification is the task of obtaining word-level language labels for code-switched sentences.", "For English-Hindi we choose the FIRE 2013 (FIRE LID) dataset, originally created for the transliterated search subtask (Roy et al., 2013).", "The test and development sets provided contain word-level language-tagged sentences.", "Statistics for all splits, including the training data, are given in Table 1."

Corpus              Train   Dev    Test   All      (English-Hindi)
FIRE LID (D)        2631    500    406    3537
UD POS (D)          1384    215    215    1814
FG POS (R)          2104    263    264    2631
IIITH NER (R)       2467    308    309    3084
SAIL Sentiment (R)  10080   1260   1261   12601
QA (R)              250     -      63     313
NLI (R)             1040    130    130    1300

Corpus              Train   Dev    Test   All      (English-Spanish)
EMNLP 2014          10259   1140   3014   14413
Bangor POS          2192    274    274    2758
CALCS NER           27366   3420   3421   34208
Sentiment           1681    211    211    2103
Table 1: Corpus Statistics (sentences per split; D/R mark Hindi in Devanagari/Roman script).

"For English-Spanish we choose the dataset of (Solorio et al., 2014), provided as part of the LID shared task at EMNLP 2014.", "We report the highest score obtained for SPA-EN (Solorio et al., 2014) as the SOTA for this task.", "POS tagging involves labelling each word with a grammatical part-of-speech tag such as noun, verb, adjective, pronoun, preposition, etc.", "For English-Hindi, we use two datasets.", "The first is the code-switched Universal Dependency parsing dataset provided by (Bhat et al., 2018) (UD POS).", "This corpus contains a transliterated version, where Hindi is in the Roman script, and also a corrected version in which Hindi has been manually converted back to Devanagari.", "We report the highest score obtained by (Bhat et al., 2018) as the SOTA for this task.", "The second English-Hindi dataset we use was part of the ICON 2016 Tool Contest on POS Tagging for Code-Mixed Indian Social Media Text (Jamatia et al., 2016) (FG POS).", "We report the highest score obtained by (Anupam Jamatia, 2016) (report communicated directly by the authors) as the SOTA for this task.", "For English-Spanish, of the two corpora utilised in (AlGhamdi et al., 2016), we choose the Bangor Miami corpus (Bangor POS), owing to its larger size.", "We report the highest score obtained by (AlGhamdi et al., 2016) as the SOTA for this task.", "NER involves recognizing named entities such as persons, locations and organizations in a segment of text.", "For English-Hindi we use the Twitter NER corpus provided by (Singh et al., 2018) (IIITH NER).", "We report the highest score obtained by (Singh et al., 2018) as the SOTA for this task.", "For English-Spanish, we use the Twitter NER corpus provided as part of the CALCS 2018 shared task on NER for code-switched data (Aguilar et al., 2019) (CALCS NER).", "We report the highest score obtained by (Winata et al., 2019) as the SOTA for this task.", "Sentiment analysis is a sentence classification task wherein each sentence is labeled as expressing a positive, negative or neutral sentiment.", "For English-Hindi we choose the sentiment-annotated social media corpus used in the ICON 2017 shared task Sentiment Analysis for Indian Languages (SAIL) (Patra et al., 2018).", "This corpus is originally language-tagged at the word level, with Hindi in the Roman script.", "We report the highest score obtained for HI-EN (Patra et al., 2018) as the SOTA for this task.", "For English-Spanish we choose the sentiment-annotated Twitter dataset provided by (Vilares et al., 2016), which we split into an 8:1:1 train:test:validation split, preserving the sentiment distribution.", "(Vilares et al., 2016) report an average F1 score of 58.9 on this dataset, while (Pratapa et al., 2018b) report an F1 of 64.6 on the same, which we take as the SOTA for this dataset.", "We are not aware of further work on this dataset.", "Question Answering is the task of answering a question based on a given context or world knowledge.", "We choose the dataset provided by (Chandu et al., 2018a), which contains two types of questions for En-Hi: one with context (185 article-based questions) and one containing image-based questions (774 questions).", "For the image-based questions we use the DrQA Document Retriever module 2 to extract the most relevant context from Wikipedia.", "2 https://github.com/facebookresearch/DrQA", "Since it is a code-switched dataset, context could not be extracted for all questions.", "Natural Language Inference is the task of inferring a positive (entailed) or negative (contradicted) relationship between a premise and a hypothesis.", "While most NLI datasets contain sentences or images as premises, the code-switched NLI dataset we use contains conversations as premises, making it a conversational NLI task (Khanuja et al., 2020).", "Since this is a new dataset, we report our number as the SOTA for this task.", "We use standard architectures for solving each of the tasks mentioned above (refer to Appendix B).", "We experiment with several existing cross-lingual word embeddings that have been shown to perform well on cross-lingual tasks.", "We also experiment with the Multilingual BERT (mBERT) model released by (Devlin et al., 2018).", "In a survey on cross-lingual word embeddings, (Ruder et al., 2017) establish that the various embedding methods optimize for similar objectives, given that the supervision data involved in training them is similar.", "Based on this, we choose the following representative embedding methods, which vary in the amount of supervision involved in training them."
"We use the MUSE library 3 to train both supervised and unsupervised word embeddings.", "3 https://github.com/facebookresearch/MUSE", "The unsupervised word embeddings are learnt without any parallel data or anchor points.", "The method learns a mapping from the source to the target space using adversarial training and (iterative) Procrustes refinement (Conneau et al., 2017).", "The supervised method leverages a bilingual dictionary (or identical character strings as anchor points) to learn a mapping from the source to the target space using (iterative) Procrustes alignment.", "This method, proposed by (Hermann and Blunsom, 2014), leverages parallel data, based on the assumption that parallel sentences are equivalent in meaning and consequently have similar sentence representations.", "We use the BiCVM toolkit 4 to learn these embeddings.", "The parallel corpus we use for English-Spanish consists of 4.5M parallel sentences from Twitter.", "For English-Hindi, we make use of an internal parallel corpus consisting of roughly 5M parallel sentences.", "This method makes use of parallel corpora as well as word alignments to learn cross-lingual embeddings.", "(Luong et al., 2015) adapt the skip-gram objective originally proposed by (Mikolov et al., 2013) to a bilingual setting wherein a model learns to predict words cross-lingually along with the monolingual objectives.", "We make use of the fast_align toolkit 5 to learn word alignments given parallel corpora, and use the BiVec toolkit 6 to learn the final BiSkip embeddings given the parallel corpora and the word alignments.", "The parallel corpora utilised to learn these are the same as those used to learn the BiCVM embeddings.", "We also experiment with skip-gram embeddings learnt from synthetically generated code-mixed data, as proposed by (Pratapa et al., 2018b).", "We make use of the fasttext library 7 to learn the skip-gram embeddings.", "For English-Spanish, we obtain data from (Pratapa et al., 2018a), which consists of 8M synthetic code-switched sentences.", "For English-Hindi, we generate synthetic data from the IITB parallel corpus 8.", "We sample from the generated sentences using Switch Point Fraction (SPF), as described in (Pratapa et al., 2018a), to obtain a GCM corpus of roughly 10M sentences.", "Multilingual BERT is pre-trained on monolingual corpora of 104 languages and has been shown to perform well on zero-shot cross-lingual model transfer and code-switched POS tagging (Pires et al., 2019).", "Specifically, we use the bert-base-multilingual-cased model for our experiments.", "4 https://github.com/karlmoritz/bicvm/ 5 https://github.com/clab/fast_align 6 https://github.com/lmthang/bivec 7 https://fasttext.cc/ 8 http://www.cfilt.iitb.ac.in/iitb_parallel/", "(Sun et al., 2019) show that fine-tuning BERT with in-domain data on language modeling improves performance on downstream tasks.", "On similar lines, we fine-tune the mBERT model with synthetically generated code-switched data (gCM) and a small amount of real code-switched data (rCM) on the masked language modeling objective.", "The training curriculum we use in fine-tuning this model is similar to that proposed by (Pratapa et al., 2018a), which has been shown to improve language modeling perplexity.", "Although we train on real code-mixed data, it accounts for a small fraction (less than 5%) of the total code-mixed data used.", "Refer to Appendix C for training details."
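This MLM fine-tuning step can be sketched with the HuggingFace Trainer; the following is a minimal sketch with illustrative file paths and hyperparameters, not the settings of Appendix C:

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling,
                          LineByLineTextDataset, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# Synthetic (gCM) plus a small amount of real (rCM) code-switched text, one sentence per line.
dataset = LineByLineTextDataset(tokenizer=tokenizer,
                                file_path="cs_corpus.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-cs", num_train_epochs=1,
                           per_device_train_batch_size=32),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()  # the fine-tuned encoder is then plugged into the downstream task heads
```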
"Tables 3-8 show the results of using the embedding techniques described above for each task and dataset.", "mBERT provides a large increase in accuracy as compared to cross-lingual techniques, and in most cases the modified mBERT technique performs best.", "We do not experiment with baseline or cross-lingual embedding techniques for NLI, since we find that mBERT surpasses the other techniques on all other tasks.", "For NLI, as in the other cases, we find that modified mBERT performs better than mBERT.", "We hypothesize that this happens because code-switched languages are not just a union of two monolingual languages.", "The distributions and usage of words in code-switched languages differ from their monolingual counterparts, and can only be captured with real code-switched data or synthetically generated data that closely mimics real data.", "(Glavaš et al., 2019) point out that all cross-lingual word embedding methods optimize for bilingual lexicon induction.", "Each model is trained using different language pairs and different training and evaluation dictionaries, leading to it overfitting to the task it is optimizing for and failing in other cross-lingual scenarios.", "Also, the loss function in training cross-lingual word embeddings has a component where w1 in one language predicts the context of its aligned word w2 in the other language.", "However, in the case of code-switching, w1 appearing in the context of w2 may not be natural."

Data         Baseline  Unsup. MUSE  Sup. MUSE  BiSkip  BiCVM  GCM    mBERT  Mod. mBERT  SOTA
FIRE En-Hi   93.21     94.53        94.92      93.98   95.24  93.64  95.87  96.6        N/A (9)
EMNLP En-Es  92.95     92.86        93.39      92.79   91.47  92.42  95.97  96.24       94.0
Table 3: LID results (F1).

"9 The original task was language tagging and transliteration of Hindi words in the Roman script, while we report LID results for Hindi in Devanagari; an accuracy of 99.0 was obtained on the original subtask (Roy et al., 2013).", "10 We create our own test split from the training data, since the test data is not publicly available.", "11 The original dataset contains multiple code-mixed pairs and there exists no language-based segregation of the results; since we only choose the EN-HI examples, we report this as N/A.", "This clearly highlights the need to learn cross-lingual embeddings with code-mixed language processing as an optimization objective.", "The results using mBERT cannot be directly compared to the cross-lingual models because of the difference in the magnitude of data involved in training.", "Also, because mBERT is trained on 104 languages together, with massive amounts of data for a large number of epochs, it learns several common features better, providing a well-represented common embedding space.", "The training data used for training the cross-lingual embeddings is restricted to Twitter and query logs, while mBERT is trained on the entire Wikipedia dump.", "Overall, the cross-lingual and mBERT models perform better for English-Spanish as compared to English-Hindi.", "This could be due to several reasons.", "English and Spanish are similar languages, with both mostly retaining their individual word order while code-switching, which is not the case for English and Hindi.", "Romanized Hindi does not use standardized spellings, and errors made by the transliterator could have influenced the results.", "We use Twitter and social media data to train cross-lingual word embeddings for English-Spanish, which are similar in domain to the task datasets, while we use the IITB and query-based parallel corpora for English-Hindi, which are generic in domain, constrained by the available resources at hand."
"We find that for most tasks, modified mBERT performs better than mBERT.", "In cases where this is not true (QA En-Hi; FG En-Hi), the difference in accuracy between the two models is small.", "This could be attributed to errors made by the transliterator or to corpus differences, but in general we observe that the modified En-Hi mBERT model does not significantly outperform the base mBERT model.", "Given the promising results obtained by modified mBERT, it would be interesting to pre-train a language model for code-switched data, trained on the monolingual corpora of the languages involved and fine-tuned on GCM as proposed, and compare it against fine-tuning mBERT itself, which is trained on multiple languages.", "We find that accuracies vary across tasks in the GLUECoS benchmark, and except in the case of LID, code-switched NLP is far from solved.", "This is particularly stark in the case of Sentiment and NLI, which are three-way and two-way classification tasks, respectively.", "Modified mBERT performs only a little over chance, which shows that we are still in the early days of solving NLI for code-switched languages, and also indicates that our models are far from truly being able to understand code-switched language.", "In this paper, we introduce the first evaluation benchmark for code-switching, GLUECoS.", "The benchmark contains datasets in English-Hindi and English-Spanish for six NLP tasks: LID, POS tagging, NER, Sentiment Analysis, Question Answering and a new code-switched Natural Language Inference task.", "We test various embedding techniques across all tasks and datasets and find that multilingual BERT outperforms cross-lingual embedding techniques on all tasks.", "We also find that for most datasets, a modified version of mBERT, fine-tuned on synthetically generated code-switched data with a small amount of real code-switched data, performs best.", "This indicates that while multilingual models do go a long way in solving code-switched NLP, they can be improved further by using real and synthetic code-switched data, since the distributions in code-switched languages differ from those of the two languages being mixed.", "In this work, we use standard architectures to solve each NLP task individually and vary the embeddings used.", "In future work, we would like to experiment with a multi-task setup wherein tasks with less training data can benefit significantly from those with abundant labelled data, since most code-switched datasets are small and difficult to annotate.", "We experiment with datasets having varied amounts of code-switching and from different domains, and show that some tasks, such as LID and POS tagging, are relatively easier to solve, while tasks such as QA and NLI have low accuracies.", "We would like to add more diverse tasks and language pairs to the GLUECoS benchmark in a future version.", "All the datasets used in the GLUECoS benchmark are publicly available, and we plan to make the NLI dataset available for research use.", "We hope that this will encourage researchers to test multilingual, cross-lingual and code-switched embedding techniques and models on this benchmark." ]
[ "abstain", "abstain", "method", "objective", "result", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "result", "objective", "objective", "result", "objective", "result", "abstain", "objective", "objective", "method", "result", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "objective", "abstain", "result", "result", "abstain", "objective", "method", "objective", "method", "method", "method" ]
[ "Sandipan Sikdar RWTH Aachen University sandipan.sikdar@ cssh.rwth-aachen.de", "Abstract", "In this paper, we introduce Integrated Directional Gradients ( IDG ), a method for attributing importance scores to groups of features, indicating their relevance to the output of a neural network model for a given input.", "The success of Deep Neural Networks has been attributed to their ability to capture higher level feature interactions.", "Hence, in the last few years capturing the importance of these feature interactions has received increased prominence in ML interpretability literature.", "In this paper, we formally define the feature group attribution problem and outline a set of axioms that any intuitive feature group attribution method should satisfy.", "Earlier, cooperative game theory inspired axiomatic methods only borrowed axioms from solution concepts (such as Shapley value) for individual feature attributions and introduced their own extensions to model interactions.", "In contrast, our formulation is inspired by axioms satisfied by characteristic functions as well as solution concepts in cooperative game theory literature.", "We believe that characteristic functions are much better suited to model importance of groups compared to just solution concepts.", "We demonstrate that our proposed method, IDG , satisfies all the axioms.", "Using IDG we analyze two state-of-the-art text classifiers on three benchmark datasets for sentiment analysis.", "Our experiments show that IDG is able to effectively capture semantic interactions in linguistic models via negations and conjunctions.", "In the last decade Deep Neural Networks (DNN) have been immensely successful.", "Much of this success can be attributed to their ability to learn from complex higher order interactions from raw features (Goodfellow et al., 2016).", "This success of DNNs has led to them being increasingly adopted Equal contribution for algorithmic decision making.", "This in turn has led to increasing concerns over explainability and interpretability of these models, given the important role they are beginning to take in society (Selbst and Barocas, 2018).", "One area of work that has emerged in recent years is that of black box model explanation strategies that explain the output of a DNN for a given input using feature attribution scores or saliency maps (Sundararajan et al., 2017; Shrikumar et al., 2017).", "Numerous studies have been published in recent years proposing different strategies to answer the question which features in the input were most important in deciding the output of the DNN?", "However, modern DNNs take as input raw data as features, and learn from higher order interaction of those features.", "Thus in the past year a number of studies have instead focused on explaining feature interactions rather than explaining individual features (Chen and Jordan, 2020; Jin et al., 2019; Sundararajan et al., 2020; Chen et al., 2020; Tsang et al., 2020).", "One issue that remains, however, is that given two methods for attributing importance scores, it is not entirely straight forward to objectively compare them.", "As has been noted by earlier studies (Sun-dararajan et al., 2017), if the output of an attribution method seems non-intuitive it is not easy to answer if that is caused by", "(i) limitations of the attribution method,", "(ii) limitations of the DNN model being explained, or", "(iii) limitation of the data on which the DNN model was trained.", "Like multiple previous studies (Chen and Jordan, 2020; 
Sundararajan et al., 2020; Tsang et al., 2020), we take an axiomatic approach to this problem, whereby we first define the set of properties/axioms that a good solution must satisfy, followed by the development of a solution that satisfies those axioms.", "The method (for feature interaction attribution) presented in this study is called Integrated Directional Gradients, or IDG.", "Like multiple earlier methods in this area, IDG is a cooperative-game-theory-inspired method.", "However, unlike earlier cooperative-game-theory-inspired axiomatic methods, which only borrowed axioms from solution concepts (such as the Shapley value) for individual feature attributions and introduced their own extensions to model interactions, our formulation is inspired by axioms satisfied by well-behaved characteristic functions as well as by solution concepts in the cooperative game theory literature.", "We find that well-behaved characteristic functions provide a much simpler and more intuitive framework for defining axioms for group attributions.", "We apply IDG to state-of-the-art models in the NLP domain.", "As part of its input, IDG requires a set of meaningful feature sets that have a hierarchical structure (Section 2.1).", "In this paper we use the parse trees of sentences to construct the meaningful feature structures.", "Figure 1 shows an illustrative example of the nature of explanations and attributions computed using IDG.", "The major contributions of the current work are as follows: First, we formally define the feature group attribution problem as an extension of the feature attribution problem (Section 2.1).", "Second, we state a set of axioms that a well-behaved feature group attribution method should satisfy (Section 2.2).", "Third, we present the method of Integrated Directional Gradients, or IDG, as a solution to the feature group attribution problem that satisfies the stated axioms (Section 2.3).", "Fourth, we propose an efficient algorithm to compute IDG for a given set of feature groups with a hierarchical structure (Section 2.4).", "Finally, we compare IDG with other recently proposed related methods for computing feature interaction attributions (Section 3).", "To facilitate reproducibility, the implementation of IDG has been made publicly available 1.", "In this section we formally state the problem of assigning attribution scores to meaningful feature groups.", "Let f(x) be a deep neural network function that takes as input an n-dimensional real-valued vector x ∈ R^n and produces a real-valued scalar output.", "Let A = {a_1, a_2, ..., a_n} refer to the set of features, with x_i referring to the value of feature a_i in the feature vector x.
"Then the feature group attribution problem is defined as follows: Given an input x, a baseline b ∈ R^n, and a family of meaningful feature subsets M ⊆ P(A), assign to every subset of features S ⊆ A a value/importance score v(S).", "Here, P(A) represents the power set of the feature set.", "The above formulation is inspired by the cooperative game theory literature.", "Intuitively, we think of features as players in a cooperative game trying to help the DNN model reach its output.", "The objective then is to design a good value/importance function (a characteristic function in the cooperative game theory literature) for each feature subset (coalition of players).", "Note that the above formulation is very different from existing cooperative-game-theory-inspired feature attribution methods.", "Most existing methods assume that the value/characteristic function exists and then compute a payoff assignment vector for individual features, typically using Shapley values.", "Similar to earlier studies, in our formulation we assume that the baseline b represents the zero input, or the absence of contribution from any feature.", "The family of meaningful feature subsets M captures the notion that not all subsets of features represent meaningful parts of the input.", "Another intuitive way to think about this is that not all features can collaborate directly, but need to be part of groups that can directly collaborate.", "In general we will assume that M has a hierarchical containment structure, that is, the feature groups in M can be represented as a directed acyclic graph, with a tree being a special case.", "Further, we will also assume that every individual feature is in M, that is, {a_i} ∈ M for i ∈ {1, 2, ..., n}, and represents a leaf node in the hierarchy, while the set of all features is also in M, that is, A ∈ M, and represents the root of the hierarchy."
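For text, such a family M can be read off a constituency parse: every token is a leaf and every constituent is a meaningful feature group; a minimal sketch using nltk's Tree, where the bracketed parse string is illustrative and tokens are replaced by their positions so that groups are sets of feature indices:

```python
from nltk import Tree

def meaningful_subsets(parse_str):
    """Collect M: one set of token positions per constituent (leaves and root included)."""
    tree = Tree.fromstring(parse_str)
    M = set()
    for sub in tree.subtrees():            # every constituent, including pre-terminals
        M.add(frozenset(int(leaf) for leaf in sub.leaves()))
    return M

parse = "(S (NP (DT 0) (NN 1)) (VP (VBD 2) (ADJP (RB 3) (JJ 4))))"
print(sorted(meaningful_subsets(parse), key=len))
# singletons {0}..{4} are the leaves; {0,1,2,3,4} is the root A
```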
In this section we present a set of axioms that a well-behaved value/importance function should satisfy. Note that the following four axioms are variants of standard axioms for characteristic functions in the cooperative game theory literature.

Axiom 1 (Non-Negativity) $v(S) \geq 0$ for every set of features $S \subseteq A$.

Axiom 2 (Normality) $v(\emptyset) = 0$.

Axiom 3 (Monotonicity) If $S \subseteq T$ then $v(S) \leq v(T)$.

Axiom 4 (Superadditivity) The value of the union of two disjoint sets of features is greater than or equal to the sum of the values of the two sets; if $S \cap T = \emptyset$ then $v(S \cup T) \geq v(S) + v(T)$.

Since the value function represents the importance of a set of features, which is intuitively a directionless quantity, the Non-Negativity axiom ensures that every feature has a non-negative value/importance score. Similarly, the Normality axiom ensures that the importance score assigned to the empty set of features is zero. Since in the current framework the features in a deep neural network collaborate, with the assumption that collaboration can only be beneficial, the axioms of Monotonicity and Superadditivity ensure that collaboration doesn't lead to diminished value/importance. Note that Superadditivity together with Non-Negativity implies Monotonicity. In a cooperative game, players cooperate to generate the maximum value. A sometimes implicit assumption in these games is that it is always possible for a player to do nothing, in which case they generate zero value. Thus, if doing something generates negative value, a rational player will always choose to do nothing. This is the essence of Axiom 1. In the axiomatic ML explanation literature, features are thought of as players cooperating to predict the output. One can also think of the value provided by a feature (the importance of the feature) as the information contained in the feature that is effectively used by the model. This view also supports the assumption of Axiom 1, as quantity of information (entropy) is likewise non-negative. Axioms 1–3 are some of the foundational axioms of cooperative game theory (Chalkiadakis et al., 2011). While much mathematical theory has been published for computing solution concepts in games where these assumptions do not hold, we argue that those games themselves can be difficult to interpret and are thus less suitable for developing interpretability/explainability methods. The following three axioms are variations of axioms of the same name presented in Sundararajan et al. (2017). The modifications presented here are necessary to incorporate the complexities resulting from assigning attribution scores to groups of features rather than individual features.

Axiom 5 (Sensitivity (a)) Let there be a feature $a_i$ such that $f(x) \neq f(b)$ for every input feature vector $x$ and baseline vector $b$ that differ only in $a_i$. Then $v(\{a_i\}) > 0$ and $v(S) > 0$ for every set of features $S$ such that $a_i \in S$.

Axiom 6 (Sensitivity (b)) Let there be a feature $a_j$ such that $f(x) = f(b)$ for every input feature vector $x$ and baseline vector $b$ that differ only in $a_j$. Then $v(\{a_j\}) = 0$ and $v(S) = v(S \setminus \{a_j\})$ for every set of features $S$ such that $a_j \in S$.

In essence, the axiom Sensitivity (a) ensures that features that do affect the output of the DNN are not assigned a zero value/importance. Consequently, any feature group that includes such a feature must also be assigned a non-zero value. Conversely, the axiom Sensitivity (b) ensures that any feature that does not affect the output of the DNN is assigned a zero value, and that it doesn't contribute any value to any feature group in which it is included.
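The first four axioms can be checked mechanically for any candidate value function. A minimal Python sketch (ours, purely illustrative) that tests Axioms 1, 2, and 4 by enumeration (Axiom 3 then follows, as noted above):

    from itertools import combinations

    def satisfies_basic_axioms(v, features, tol=1e-9):
        """v: dict mapping frozenset -> score, over all subsets of `features`.
        Checks Axioms 1, 2, and 4 by brute force (exponential in |features|,
        so suitable only for toy sanity checks)."""
        subsets = [frozenset(c) for r in range(len(features) + 1)
                   for c in combinations(features, r)]
        if abs(v[frozenset()]) > tol:                  # Axiom 2 (Normality)
            return False
        for S in subsets:
            if v[S] < -tol:                            # Axiom 1 (Non-Negativity)
                return False
            for T in subsets:
                if S & T:                              # Axiom 4 applies to disjoint sets
                    continue
                if v[S | T] + tol < v[S] + v[T]:       # Superadditivity violated
                    return False
        return True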
Axiom 7 (Symmetry Preservation) Two features $a_i$ and $a_j$ are said to be functionally equivalent if $f(x) = f(y)$ for every pair of input vectors $x$ and $y$ such that $x_i = y_j$, $x_j = y_i$, and $x_k = y_k$ for $k \notin \{i, j\}$. Two features $a_i$ and $a_j$ are said to be structurally equivalent with respect to a family of meaningful feature subsets $M$ if $a_i \in S$ and $S \neq \{a_i\}$ implies $a_j \in S$ for all feature subsets $S \in M$, and vice versa. If two features $a_i$ and $a_j$ are both functionally and structurally equivalent, and if the given input vector $x$ and baseline vector $b$ are such that $x_i = x_j$ and $b_i = b_j$, then $v(S \cup \{a_i\}) = v(S \cup \{a_j\})$ for every subset of features $S \subseteq A \setminus \{a_i, a_j\}$.

The Symmetry Preservation axiom first defines two different types of feature equivalence: functional and structural. Two features are said to be functionally equivalent if swapping the values of those features doesn't affect the output of the DNN. Structural equivalence, on the other hand, refers to the features having equivalent positions in the structure imposed by the set of meaningful features $M$. Finally, the Symmetry Preservation axiom ensures that features that are both functionally and structurally equivalent contribute equal value/importance to all feature subsets in which they are included.

Axiom 8 (Implementation Invariance) Two neural networks $f(\cdot)$ and $f'(\cdot)$ are functionally equivalent if $f(x) = f'(x)$ for all $x$. Let their value functions be denoted by $v(\cdot)$ and $v'(\cdot)$, respectively. Then $v(S) = v'(S)$ for all subsets of features $S \subseteq A$.

The Implementation Invariance axiom simply ensures that different implementations of the same DNN function result in the same value/importance assignment to all feature subsets.

In this section we present a solution to the feature group attribution problem that we call the Integrated Directional Gradients method, or IDG. This method is inspired by the Integrated Gradients method (Sundararajan et al., 2017) and by Harsanyi dividends (Harsanyi, 1963) in cooperative game theory. The high-level idea of the method is to construct the value function in terms of the dividends generated by each meaningful feature subset. In this formulation, each meaningful feature group contributes additional value to the DNN model, which we call the dividend of the group. The dividend of a feature group $S$ is represented by $d(S)$, with $d(S) \in [0, 1)$. The dividend of a single feature is also its value and a measure of its importance. One of the simplest measures of the importance of a feature is the partial derivative of the DNN function with respect to the feature. The partial derivative also has the intuitive interpretation that it represents the amount of change in the output of the DNN function per unit change in the input, in the direction of the feature. However, as noted in earlier studies (Sundararajan et al., 2017), due to effects such as gradient saturation, partial derivatives can't be directly used for measuring the importance of a feature. To alleviate this issue, the authors of the Integrated Gradients method recommend taking a path integral of the partial gradient over the straight-line path connecting the baseline $b$ to the input $x$. For this study, we take a similar approach, and take the absolute value of the path integral of the partial gradient as the dividend of a single feature.
The dividend of a group of features is distinct from its value and is a measure of the importance of the interaction of the features in the group. For this study we consider the directional derivative of the DNN function in the direction of the given set of features to be representative of the importance of the interaction of that set of features. As in the single-feature case, this has the intuitive interpretation that it represents the amount of change in the output of the DNN function per unit change in the input, in the direction of the subset of features. However, as with single features, issues such as gradient saturation still need to be addressed for directional gradients as well. Thus we propose to use the absolute value of IDG, which is the path integral of the directional gradient over the straight-line path from the baseline $b$ to the input $x$, as the dividend of the feature group. Further, the sign of IDG may be used to signify the nature of the contribution (positive or negative) to the model output.

$$z^s_i = \begin{cases} x_i - b_i & \text{if } a_i \in S \\ 0 & \text{otherwise} \end{cases} \quad (1)$$

$$\nabla_S f(x) = \nabla f(x) \cdot \hat{z}^s \quad \text{where} \quad \hat{z}^s = \frac{z^s}{\| z^s \|} \quad (2)$$

$$\mathrm{IDG}(S) = \int_{\alpha=0}^{1} \nabla_S f(b + \alpha (x - b)) \, d\alpha \quad (3)$$

$$d(S) = \begin{cases} \frac{|\mathrm{IDG}(S)|}{Z} & \text{if } S \in M \\ 0 & \text{otherwise} \end{cases} \quad (4)$$

$$Z = \sum_{S \in M} |\mathrm{IDG}(S)| \quad (5)$$

$$v(S) = \sum_{T \in \{T \mid T \subseteq S \,\wedge\, T \in M\}} d(T) \quad (6)$$

Equations 1 to 6 describe the process of computing the value/importance $v(S)$ of a subset of features using the IDG method. Given a feature subset $S$, first the feature subset difference vector $z^s$ is computed from the input feature vector $x$ and the baseline vector $b$. Next, $\mathrm{IDG}(S)$ is computed by integrating the directional derivative, in the direction of $\hat{z}^s$, over the straight-line path from the baseline $b$ to the input $x$. The dividend $d(S)$ of the feature subset $S$ is then computed by normalizing the absolute value of $\mathrm{IDG}(S)$ over all meaningful subsets, such that the dividends of all meaningful feature subsets sum to 1. Finally, the value $v(S)$ of the given feature subset $S$ is computed by adding up the dividends of all the meaningful subsets contained in $S$, including itself. Similar to Sundararajan et al. (2017), we approximate the integral in IDG by summing over the gradients at points occurring at small intervals along the path from the baseline $b$ to the input $x$. The approximated $\mathrm{IDG}(S)$ is computed as:

$$\mathrm{AIDG}(S) = \frac{1}{m+1} \sum_{k=0}^{m} \nabla_S f\!\left(b + \frac{k}{m}(x - b)\right) \quad (7)$$

Here $m$ denotes the number of steps in the Riemann approximation of the integral. We illustrate the computation of attribution scores using an example sentence, Frenetic but not really funny, taken from the SST dataset (Figure 1). The task is sentiment classification, and the inferred class for this sentence is negative. The model used for classification is XLnet-base (refer to Section 3 for details on the dataset, model, and training procedure). We leverage the constituency parse tree of the sentence to obtain meaningful feature groups. Note that the XLnet tokenizer uses byte pair encoding; hence the word Frenetic is decomposed further into Fre, net, and ic. Each token is in turn represented by an embedding of size 768. The value function is calculated in a bottom-up manner, starting from each embedding dimension of the constituent tokens (referred to as $d_i$ in Figure 1). These are then combined to obtain the value function score for each token. We then follow the parse tree to calculate the score for each phrase. For example, the score for the phrase Frenetic but is 0.407, while that of not really funny is 0.454.
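As a concrete illustration of Eqs. 1, 2, and 7, here is a minimal PyTorch sketch (ours; the function and variable names are illustrative, and the authors' released implementation may differ) that approximates $\mathrm{AIDG}(S)$ with the Riemann sum above:

    import torch

    def aidg(f, x, b, S, m=50):
        """Riemann-sum approximation of IDG(S) per Eq. 7.
        f: maps a 1-D tensor to a scalar tensor; x, b: input and baseline;
        S: iterable of feature indices in the subset."""
        # Eq. 1: subset difference vector z^s, zero outside S.
        z = torch.zeros_like(x)
        idx = torch.tensor(sorted(S))
        z[idx] = (x - b)[idx]
        z_hat = z / z.norm()                           # unit direction of S

        total = torch.tensor(0.0)
        for k in range(m + 1):
            point = (b + (k / m) * (x - b)).detach().requires_grad_(True)
            grad = torch.autograd.grad(f(point), point)[0]
            total = total + torch.dot(grad, z_hat)     # Eq. 2: directional derivative
        return total / (m + 1)

The dividend of Eq. 4 is then the absolute value of this quantity, normalized by $Z$ over all groups in $M$. Note that the sketch assumes $x$ and $b$ differ on at least one feature in $S$; a real implementation would need to guard against a zero direction vector.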
We now propose a polynomial-time dynamic programming algorithm (Algorithm 1) for calculating the attribution score (i.e., the value function $v$) for all the meaningful subsets in $M$, for a given input $x$ and a baseline $b$. First, $f$ is evaluated at each of the $m + 1$ intermediate positions between $x$ and $b$. Next we compute $\mathrm{AIDG}(S)$ for all feature groups in $M$. This is followed by the computation of $Z$, which is simply the sum of the $\mathrm{AIDG}(S)$ scores over the meaningful subsets. (Detailed proofs are available in the Appendix.) Given $Z$ and the individual scores, the dividend $d(S)$ can easily be computed using Eq. 4. Finally, once the dividends of all meaningful subsets are known, the value function $v(S)$ for each of the meaningful subsets can be computed using Eq. 6. The overall time complexity of Algorithm 1 is $O(m(F + B + V|A|) + V + E)$, where $F$ and $B$ are the time complexities of a single forward and backward pass of the neural network, $V$ and $E$ are, respectively, the number of vertices and edges in the graph structure induced by the family of meaningful feature subsets $M$, $|A|$ is the number of features, and $m$ is the number of approximation steps used to compute $\mathrm{AIDG}(S)$. For more details on the complexity result, refer to the Appendix.
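Algorithm 1 itself is given in the paper; what follows is our own sketch of the bottom-up pass for a tree-structured $M$, composing the `members` and `aidg` helpers from the earlier sketches via Eqs. 4–6 (for a general DAG, the contained groups would additionally need to be deduplicated):

    def attribution_scores(f, x, b, root, m=50):
        groups = list(members(root))
        # One AIDG evaluation per meaningful group (Eq. 7).
        raw = {id(g): abs(aidg(f, x, b, g.features, m)) for g in groups}
        Z = sum(raw.values())                                  # Eq. 5
        dividend = {k: s / Z for k, s in raw.items()}          # Eq. 4

        value = {}
        def visit(g):
            # Eq. 6: a group's value is its own dividend plus the
            # dividends of all meaningful subsets it contains.
            v = dividend[id(g)] + sum(visit(c) for c in g.children)
            value[id(g)] = v
            return v
        visit(root)
        return value

By construction, the dividends are non-negative and sum to 1, so the resulting value function satisfies the well-behavedness axioms discussed above, with the value of the root (the full feature set) equal to 1.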
It has been noted that when a DNN explanation method returns a non-intuitive result, it is not possible to disentangle which part of the pipeline (the training data, the trained model, or the explanation method) is to blame for the result (Sundararajan et al., 2017). Thus many studies (Sundararajan et al., 2017; Chen and Jordan, 2020; Sundararajan et al., 2020; Tsang et al., 2020) have instead taken the axiomatic strategy of comparing methods qualitatively. Taking a similar approach, we present in Table 1 a qualitative comparison of the recent feature interaction attribution methods most similar to our work. We group the comparison into four major categories. First, in most of the cooperative game theory literature players are assumed to cooperate. It is thus intuitive that more cooperation will not lead to lesser benefit, and it is generally assumed that the grand coalition will form (Chalkiadakis et al., 2011). While there are mathematical formulations that work in the absence of this assumption, we argue that they lead to non-intuitive results when applied to the task of feature interaction attribution. These assumptions are manifested in the well-behavedness properties of the characteristic/value function. In Table 1 we see that existing cooperative-game-theory-inspired methods generally ignore this aspect when computing importance attributions. Second, to compute the effect of a model in the absence of a feature, attribution methods generally mask out the feature, typically replacing it with a ZERO or PAD token. It has been noted that this requires the DNN model to be evaluated in a region of the input space for which it has not received any training data and on which its accuracy was never evaluated (Sundararajan et al., 2017; Kumar et al., 2020). Thus the results the model produces for these out-of-distribution inputs are questionable. In Table 1 we see that all existing methods compute their attributions by evaluating the model on such out-of-distribution inputs. Third, in a cooperative game theoretic setting where players (here, features) are assumed to cooperate, it is intuitive that as the size of a coalition grows, the coalition will not become less important. This is the key intuition behind Axioms 1–4. However, in Table 1 we see that none of the existing methods ensure that their attributions adhere to this key intuition. Finally, cooperative-game-theory-based methods generally ensure that the axioms of Completeness (a.k.a. Efficiency), Symmetry Preservation, Linearity, and Sensitivity (a.k.a. Null/Dummy player) are warranted by their attributions. In this paper we follow the lead of Sundararajan et al. (2017) and use the nomenclature from Aumann and Shapley (2015), which additionally introduces the axiom of Implementation Invariance. In Table 1, we see that for LS-Tree (Chen and Jordan, 2020), the Shapley-Taylor Interaction Index (Sundararajan et al., 2020), and Archipelago (Tsang et al., 2020), which are cooperative-game-theory-inspired methods, these assumptions hold. However, for SCD/SOC (Jin et al., 2019) and HEDGE (Chen et al., 2020), which are not axiomatic formulations, these assumptions do not hold. For our method, IDG, all but the axiom of Linearity hold. In Section 5.2 we argue that this is not a major limitation and refer to existing literature that even argues for doing away with the Linearity axiom.

We deploy our model for the task of sentiment classification across three different datasets: Stanford Sentiment Treebank (SST) (Socher et al., 2013), Yelp reviews (Zhang et al., 2015), and IMDB (Maas et al., 2011). For each dataset, we train three state-of-the-art models: XLnet-base (Yang et al., 2019), XLnet-large (Yang et al., 2019), and BERT-itpt (Sun et al., 2019). We use the same hyperparameter configurations as mentioned in the original papers; they are summarized in the Appendix as well. The performance of these models is summarized in Table 2.

4 Results

To precisely visualize the interactions between phrases, we search the test examples for instances of negation, following the methodology proposed in Murdoch et al. (2018). Specifically, we look at the parse tree for each review and check whether the left child contains a negation phrase (e.g., lacks, never, etc.) in its first two words and the right child has a positive or a negative sentiment. Since for SST each phrase is also annotated with its corresponding sentiment label in the form of a constituency parse tree, such instances can be easily obtained.
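A minimal sketch of this filter (our reconstruction of the described procedure; the cue list and tree interface are illustrative, not the authors' code):

    NEGATION_CUES = {"not", "never", "no", "lacks", "nothing"}   # illustrative, not exhaustive

    def is_negation_instance(node):
        """node: a binary constituency-tree node whose subtrees expose
        .tokens() and a .sentiment label (as in SST's annotated parses)."""
        left, right = node.children
        first_two = [t.lower() for t in left.tokens()[:2]]
        has_cue = any(tok in NEGATION_CUES for tok in first_two)
        polar_right = right.sentiment in ("positive", "negative")
        return has_cue and polar_right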
Table 1: A comparison of axiomatic guarantees/properties of feature interaction attribution methods: SCD/SOC (Jin et al., 2019), HEDGE (Chen et al., 2020), LS-Tree (Chen and Jordan, 2020), Shapley-Taylor Interaction Index (STI) (Sundararajan et al., 2020), Archipelago (Tsang et al., 2020), and IDG (proposed method). Properties compared: Well-Behaved Characteristic Function, In-Distribution Evaluations, Non-Negativity, Normality, Monotonicity, Superadditivity, Sensitivity, Symmetry Preservation, Linearity, Completeness, Implementation Invariance.

For Yelp and IMDB, we look for the presence of negation phrases in the reviews and then manually select 100 such examples from the filtered set. Since parse trees are not explicitly available for the Yelp and IMDB reviews, we deploy a state-of-the-art constituency parser (Mrini et al., 2019) to obtain them. We illustrate with one example each from the SST and Yelp datasets, in Figures 2(a) and 2(b) respectively. Additional examples can be found in the Appendix. For Figure 2(a), the classification model is XLnet-base, and the ground truth as well as the inferred class is negative. The first part (Though everything might be literate and smart) has a positive sense. But when appended with the second part (it never took off and always seemed static), a negative sense is manifested. This is captured by the classification model, as demonstrated by our framework. For the example in Figure 2(b), the classifier model is BERT-itpt, and the inferred as well as the ground-truth class is negative. This example consists of two sentences: while the first one, Nice atmosphere, has a positive sense, when combined with the second sentence, Cheeseburger was not at all that, the overall sense turns negative. This again is clearly manifested in the scores assigned by our framework. We also report results on IMDB reviews (Maas et al., 2011) in the Appendix.

As noted by Sundararajan et al. (2017), when the result of an explanation method is non-intuitive, it is not obvious which part of the ML pipeline (the data, the model being explained, or the explanation method) is to be blamed, and by how much. Due to this issue, many authors (Sundararajan et al., 2017; Chen and Jordan, 2020; Sundararajan et al., 2020; Tsang et al., 2020) have chosen to take the axiomatic/theoretical path, where they state the properties of the proposed method and compare explanation methods based on the axioms/properties they satisfy. Nevertheless, many recent studies (Singh et al., 2018; Jin et al., 2019; Chen et al., 2020) have proposed new explanation methods and provided evaluations using quantitative metrics such as AOPC (Nguyen, 2018), Log Odds (Shrikumar et al., 2017), and Cohesion Score (Jin et al., 2019). One common strategy is to perturb the input, for example by removing the top-k most important words/features, and then measure the drop in performance. We argue that these methods of evaluation are problematic because they generally involve measuring model performance on out-of-distribution inputs. And as stated earlier, measuring the outputs of models on out-of-distribution inputs, that is, inputs on which the model has been neither trained nor tested, is questionable.
The other strategy is to perturb the model, for example by adding noise to the model weights, and then measure the drop in performance. Hooker et al. (2019) proposed a similar solution for the input perturbation case as well, namely retraining the model after perturbing all training samples. However, in this scenario, if two explanation methods provide different explanations/attributions for the different models, it is not obvious whether the models or the explanation methods are to blame. Similar issues exist for human judgement experiments as well. Due to the above issues, for the current work we too have chosen to take the qualitative comparison path.

One of the common axioms of solution concepts in cooperative game theory is Linearity. The axiom of Linearity (a.k.a. Additivity) states that if the characteristic/value function has the form $v(S) = v_1(S) + v_2(S)$, and $\phi_1(S)$ and $\phi_2(S)$ are the attributions due to $v_1(S)$ and $v_2(S)$, then the attribution due to $v(S)$ should be given by $\phi(S) = \phi_1(S) + \phi_2(S)$. During our design and experimentation we found that having the attributions normalized, that is, $v(\emptyset) = 0$ and $v(A) = 1$, provided much more intuitive results. Such normalization, however, runs counter to the possibility of an attribution method that satisfies Linearity. Further, it has been argued by some game theorists that the axiom of Linearity was added as a mathematical convenience, and also to constrain the attributions so that they are unique (Osborne and Rubinstein, 1994). Moreover, Kumar et al. (2020) argue that enforcing such uniqueness constraints in this way limits the kinds of models that can be explained by these attributions. Thus, IDG is also not a unique solution to the feature group attribution problem, due to its sacrifice of Linearity. However, given that recent studies have found (Sundararajan and Najmi, 2020) that Shapley values can be and have been used in many different ways, each claiming uniqueness, the importance of uniqueness claims is significantly diminished.

Feature attribution based methods. These methods assign importance scores to individual features, thereby explaining the decisions of the classifier model. The scores are mostly calculated either by backpropagating a custom relevance score (Sixt et al., 2020) or by directly using the gradients. The gradient-based methods aim to calculate the sensitivity of the inference function with respect to the input features, thereby measuring their importance. This approach was first introduced in Springenberg et al. (2015) and further investigated in Selvaraju et al. (2017) and Kim et al. (2019). Sundararajan et al. (2017) adopt an axiomatic approach and deem it more suitable, as feature attribution methods are hard to evaluate empirically. The other set of methods usually backpropagates custom relevance scores down to the input to identify the relevance of each input feature (Bach et al., 2015; Shrikumar et al., 2017; Zhang et al., 2018). Unlike the gradient-based methods, these are not implementation invariant (i.e., the backpropagation process is architecture specific).

Game theoretic aspect. Lundberg and Lee (2017) adopt results (Shapley values, specifically) from coalitional game theory to obtain feature attribution scores. The key idea is to consider the features as individual players involved in a coalitional game of prediction, which is considered the payout. The payout can then be fairly distributed among the players (features) to measure their importance.
This has been further explored in Lundberg et al. (2020), Ghorbani and Zou (2020), Sundararajan and Najmi (2020), and Frye et al. (2020).

Quantifying feature interactions. The methods mentioned above fail to properly capture the importance of feature interactions. Janizek et al. (2020) propose to capture pair-wise interactions by building upon the Integrated Gradients framework. Cui et al. (2020) learn global pair-wise interactions in Bayesian neural networks. Murdoch et al. (2018) introduce contextual decomposition to capture interactions among words in a text for an LSTM-based classifier. Singh et al. (2018) further extend the method to other architectures. More recent research endeavors in this direction include Tsang et al. (2020), Liu et al. (2020), and Chen et al. (2020). We elaborate on the methods closest to our work in Section 3.

7 Conclusion

In this paper we investigated the problem of feature group attribution and proposed a set of axioms that any framework for feature group attribution should fulfill. We then introduced IDG, a novel method, as a solution to the problem and demonstrated that it satisfies all the axioms. Through experiments on real-world datasets with state-of-the-art DNN-based classifiers, we demonstrated the effectiveness of IDG in capturing the importance of feature groups as deemed by the classifier.

Sandipan Sikdar was supported in part by RWTH Aachen Startup Grant No. StUpPD384-20. Parantapa Bhattacharya was supported in part by the Defense Threat Reduction Agency (DTRA) under Contract No. HDTRA1-19-D-0007, by the National Science Foundation (NSF) under Grant No. CCF-1918656, and by the Defense Advanced Research Projects Agency (DARPA) under Contract No. FA8650-19-C-7923. The authors would also like to thank the Research Computing Center at the University of Virginia for a compute time grant on the Rivanna cluster.
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "method", "abstain", "method", "abstain", "objective", "abstain", "method", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "other", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other" ]
[ "Sean Trott University of California, San Diego [email protected]", "Tiago Timponi Torrent Federal University of Juiz de Fora [email protected]", "[email protected]", "Human speakers have an extensive toolkit of ways to express themselves.", "In this paper, we engage with an idea largely absent from discussions of meaning in natural language un-derstandingnamely, that the way something is expressed reflects different ways of conceptualizing or construing the information being conveyed.", "We first define this phenomenon more precisely, drawing on considerable prior work in theoretical cognitive semantics and psycholinguistics.", "We then survey some dimensions of construed meaning and show how insights from construal could inform theoretical and practical work in NLP.", "Natural language is a versatile tool for allowing humans to express all manner of communicative intents, from simple descriptions of the entities and situations in their direct experience to elaborate rhetorical flights of fancy.", "Many NLP applications, such as information extraction, question answering, summarization, and dialogue systems, have restricted their scope to what one might call objective information contentrelatively uncontroversial facts that systems can infer from an utterance, store in a database and reason about.", "While it is tempting to equate such information with the meaning of an utterance, a large body of literature in linguistics and psycholinguistics argues that an utterance conveys much more than a simple set of facts: it carries with it a halo of intimations arising from the speaker's choices, including considerations of perspective, emphasis, and framing.", "That is, linguistic choices subtly color meaning ; far from merely conveying objective facts, they reflect how speakers conceptualize meaning and affect listeners' interpretations in predictable ways.", "Take, for example, this metaphor-rich portrayal of a newborn as a tyrant over her parental subjects: (1) Nora's arrival brought a regime change.", "Life under her adorable tyranny was filled with squawking, swaddling and ceaseless sleep-input-output cycles.", "We were relieved when she relaxed her tiny iron grip.", "This report of new parenthood describes a major life change along with everyday caregiver routines, but its emphasis is on the parents' experience of being suppressed ( under ) and controlled ( grip ) by a creature who is cast, variously, as a tyrant ( regime ), a bird ( squawk ), and a relentless machine ( sleep-input-output cycles , iron grip )albeit a (subjec-tively) adorable one.", "The power of linguistic choices to shape understanding is also evident in more mundane (and well-studied) examples: (2)", "a. Chuck bought a car from Jerry.", "Jerry sold a car to Chuck.", "Jerry paid Chuck for the car.", "b. I work at Microsoft.", "I work for Microsoft.", "c. The statue stands in the plaza.", "The statue is standing in the plaza.", "Each set includes sentences that convey roughly the same factsi.e. they could describe the same scenariobut nonetheless differ in various respects.", "The familiar framing differences between buy/sell/ pay (2a) focus attention on different participants and subevents in a commercial transaction.", "(2b) involves a subtler difference in emphasis, where the choice of at highlights the location of the work, while for evokes how that work benefits the employer.", "Grammatical marking can also shift event connotations, as illustrated by the stative vs. 
Such distinctions illustrate the general phenomenon of construal, which we claim has been neglected in NLP. We believe that a proper recognition of construal would provide a unified framework for addressing a wide range of issues involving meaning and linguistic variation, opening the way to systems that more closely approximate (actually) natural language. This paper surveys the theoretical and empirical landscape related to construal phenomena and makes the case for its relevance to NLP. After clarifying the terms adopted here (§2), we lay out a few key dimensions of construed meaning (§3) and then elaborate on some mechanisms of construal (§4). A trio of case studies illustrate how different types of construal can challenge NLP systems (§5). We end with some conclusions and suggestions for how to begin addressing these challenges (§6).

Our view of construal and its close companion, meaning, is rooted in both frame-based and cognitive semantic traditions. The notion that words and other linguistic units evoke background scenes along with specific perspectives on those scenes is captured by Fillmore's (1977) slogan, MEANINGS ARE RELATIVIZED TO SCENES. This idea has deeper consequences than merely assigning different semantic roles to examples like (2a). As Langacker (1993, p. 460) observes, "any given situation can be viewed in multiple if not infinitely many ways. Starting from the same basic conceptual content... we can form an endless variety of specific conceptions by making alternate choices in regard to the many dimensions of construal." This view of linguistic meaning, which we might call inherently multivalent, is more flexible than in many theoretical and computational treatments, particularly truth-conditional approaches that liken meanings to facts in a database. The visual domain offers a more informative analog: a photographic or artistic rendering of a scene can vary in vantage point, viewing distance, objects in sight or in focus, color and lighting choices, etc. (Langacker, 1993; Talmy, 1988). Context matters, too: a painting hanging on a preschool wall may be received differently if displayed in a museum. Just as there is no one objective, context-independent depiction of a scene, there are many valid ways to present an idea through language. We thus extend Fillmore's slogan to include all kinds of conceptual content (beyond scenes); the broader communicative context; and the effect of choices made as part of the construal process: MEANINGS ARE RELATIVIZED TO CONTENT, CONTEXT AND CONSTRUAL.

Conceptual content. We assume that linguistic units can evoke and combine all kinds of conceptual content, including open-ended world knowledge (entities, actions, events, relations, etc.)
as well as more schematic structures often associated with grammar and function words. Crucially, concepts must also be amenable to certain kinds of transformation (e.g., shifts in perspective or granularity) as part of construal; see below.1

Communicative context. We take meaning to encompass scene-level entities and events, discourse-level information about the interlocutors and their communicative intents, and other phenomena straddling the (fuzzy) semantic-pragmatic boundary, related to attention (e.g., profiling and perspective) and conditions of usage falling under what Fillmore (1985) dubbed U-Semantics (in contrast to truth-oriented T-Semantics).2 Contextual factors (e.g., the interlocutors' identity, beliefs, goals, conceptual repertoire, cultural backgrounds) can radically alter construed meaning. On this view, meaning is not arbitrarily subjective, or merely intersubjective; it is also constrained by all aspects of the communicative context.

Construal. We define construal as a dynamic process of meaning construction, in which speakers and hearers encode and decode, respectively, some intended meaning in a given communicative context. To do so, they draw on their repertoire of linguistic and conceptual structures, composing and transforming them to build coherent interpretations consistent with the speaker's lexical, grammatical, and other expressive choices.3 We take construal to be fundamental to all language use, though how much construal and what kinds of construal vary across interpretations.4

1 We are not here concerned with precisely how concepts are represented or learned, since we believe the insights related to construal apply broadly across theoretical frameworks.

2 For example, only U-Semantics can explain why the children are on the bus is preferred over the children are in the bus if the bus is in transit, despite referring to the same spatial relationship.

3 Both speakers and hearers engage in construal: speakers, in choosing how to present the idea, experience or other content they wish to convey; hearers, in reconstructing that intended meaning. Words like 'analysis' and 'interpretation' should thus be understood as applying to meaning construction by either interlocutor. (We do not focus here on the many differences between comprehension and production.)

4 Conventionality plays an important role here: initially creative expressions may require less construal as they become entrenched and their meanings more efficiently accessed.

In the simplest cases, the relevant components fit neatly together (à la compositional semantics). But many (or even most) utterances involve a myriad of disparate structures (conceptual, linguistic, and contextual) that may need to be transformed, (re)categorized, or otherwise massaged to be integrated into a single coherent whole. This conceptual flexibility is not arbitrary: the space of combinatorial options is delimited by construal operations defined with respect to certain privileged construal dimensions. A number of dimensions and operations have been proposed, many motivated by general cognitive processes; we will review some of these in §3, and illustrate how they are engaged during language use in §4. This inclusive, flexible view of meaning has broad implications for a wide variety of linguistic phenomena, and many parallels in prior work, far too many to address exhaustively here. We restrict our current scope in several ways: (1) While some aspects of context will be mentioned below, we do not address many phenomena related to pragmatic inference (e.g.
politeness, indirect requests). (2) Though many construal dimensions are relevant cross-linguistically, we will not address typological patterns in the lexical, grammatical, and cultural conventions that influence construal. (3) We highlight construal phenomena that are psycholinguistically attested and/or relevant to NLP research.

Several (partial) taxonomies of construal dimensions have been proposed in the cognitive linguistics literature (Langacker, 1993; Talmy, 1988; Croft and Wood, 2000; Taylor, 1995; Casad, 1995); see Croft and Cruse (2004) for an overview. We will not attempt to reconcile their many differences in terminology and organization, but instead present selected dimensions most relevant for NLP.

Languages have many ways of describing scenes from a specific PERSPECTIVE (or vantage point). The spatial domain provides clear examples: a cup might be described as being left or right of some other object, depending on whose perspective is adopted, or explicitly marked as being on my/your/her/Sue's left. Likewise, the same motion event can be described relative to differing deictic centers (e.g., the arrival in (1) can also be viewed as a departure from the hospital). Perspective can extend beyond the spatial domain. The use of past tense in (1) indicates the speaker's retrospective viewpoint. Differences in opinion, belief state or background have also been treated as perspective shifting. Talmy's (1988) taxonomy defines a broader version of PERSPECTIVE that includes distribution of attention. Descriptions of a static scene can adopt a dynamic perspective, evoking the experience of moving through the scene (There is a house every now and then through the valley); these descriptions can be even more explicit, as with fictive motion (The road runs through the valley) (Talmy, 1996; Matlock, 2004b).

Psycholinguistic evidence. Grammatical person can affect which perspective a comprehender adopts when reading about an event (Brunyé et al., 2009) and which actions they are most likely to remember (Ditman et al., 2010). Fictive motion can also influence the way comprehenders conceptualize a static scene (Matlock, 2004a,b).

Relevant NLP research. Perspective is crucial for understanding spatial language, e.g.
for robotics (§5.2) and other kinds of situated language. Work on grounding referents from natural language descriptions has incorporated visual perspective as another source of information about the intended referent (Devin and Alami, 2016; Ros et al., 2010; Trafton et al., 2005).

PROMINENCE (or salience) refers to the relative attention focused on different elements in a scene (Langacker, 1993; Talmy, 1988). Languages have various devices for highlighting, or profiling, some elements over others (or leaving them implicit). For example, verbs like those in (2a) differ in which elements in a larger scene are preferentially expressed. Similarly, many spatial and temporal adpositions involve an asymmetric profiling of one entity relative to another; thus the painting is above the piano and the piano is below the painting describe the same situation but differ in focus. Verbal and constructional alternations also manipulate prominence: the active/passive pair Microsoft employs me and I am employed by Microsoft differ in profiling the employer and speaker, respectively. Similarly, transitive I rolled the ball vs. intransitive The ball rolled differ in whether the ball-roller is even mentioned. Languages also differ systematically in how motion events are most idiomatically expressed, in particular in whether the main verb encodes (and foregrounds) the manner (English run) or path (Spanish entrar) of motion.

Psycholinguistic evidence. A speaker's decisions about which features to encode in the main verb versus a satellite can influence which events comprehenders find most similar (Billman and Krych, 1998) and which features they tend to remember (Gennari et al., 2002). In other work, Fausey and Boroditsky (2010) found that descriptions of an accidental event using a transitive construction (She had ignited the napkin) led participants to assign more blame to the actor involved, and even demand higher financial penalties, than descriptions using non-agentive constructions (The napkin had ignited). In language production, there are a number of factors influencing which construction a speaker chooses (e.g., current items in discourse focus (Bresnan et al., 2007), lexical and syntactic priming (Pickering and Ferreira, 2008)).

Relevant NLP research. Recovering implicit information is widely studied in NLP, and deciding which information to express is key to NLG and summarization. We mention three examples exploring how choices of form lend prominence to certain facets of meaning in ways that strongly resonate with our claims about construal. Greene and Resnik (2009) show that syntactic framing, e.g. active (Prisoner murders guard) vs. passive (Guard is murdered), is relevant to detecting speaker sentiment about violent events. Hwang et al. (2017) present an annotation scheme for capturing adpositional meaning construal (as in (2b)). Rather than disambiguate the adposition with a single label, they separately annotate an adposition's role with respect to a scene (e.g. employment) and the aspect of meaning brought into prominence by the adposition itself (e.g., benefactive for vs. locative at). This more flexibly accounts for meaning extensions and resolves some annotator difficulties. Rohde et al. (2018) studied the construction of discourse coherence by asking participants to insert a conjunction (and, or, but, so, because, before) where none was originally present, before an explicit discourse adverbial (e.g.
in other words). They found that some contexts licensed multiple alternative conjunctions, each expressing a different coherence relation; i.e., distinct implicit relations can be inferred from the same passage. This speaks to the challenge of fully annotating discourse coherence relations and underscores the role of both linguistic and contextual cues in coherence.

Concepts can be described at many levels of RESOLUTION, from highly detailed to more schematic. We include here both specificity (e.g., pug < dog < animal < being) and granularity (e.g., viewing a forest at the level of individual leaves vs. branches vs. trees). Lexical items and larger expressions can evoke and combine concepts at varying levels of detail (The gymnast triumphantly landed upright vs. A person did something).

Psycholinguistic evidence. Resolution is related to basic-level categories (Rosch et al., 1976; Lakoff, 1987; Hajibayova, 2013), the most culturally and cognitively salient levels of a folk taxonomy. Speakers tend to use basic-level terms for reference (e.g., tree vs. entity/birch), and basic-level categories are more easily and quickly accessed by comprehenders (Mervis and Rosch, 1981; Rosch et al., 1976). Importantly, however, what counts as basic-level depends on the speaker's domain expertise (Tanaka and Taylor, 1991). Speakers may deviate from basic-level terms under certain circumstances, e.g., when a more specific term is needed for disambiguation (Graf et al., 2016). Conceptualization is thus a flexible process that varies across both individual cognizers (e.g., as a function of their world knowledge) and specific communicative contexts.

Relevant NLP research. Resolution is already recognized as important for applications such as text summarization and dialogue generation (Louis and Nenkova, 2012; Li and Nenkova, 2015; Ko et al., 2019a; Li et al., 2016; Ko et al., 2019b), e.g., in improving human judgments of informativity and relevance (Ko et al., 2019b). Also relevant is work on knowledge representation in the form of inheritance-based ontologies and lexica (e.g., FrameNet (Fillmore and Baker, 2009), ConceptNet (Liu and Singh, 2004)).

CONFIGURATION refers to internal-structural properties of entities, groups of entities, and events, indicating their schematic shape and texture: multiplicity (or plexity), homogeneity, boundedness, part-whole relations, etc. (Langacker, 1993; Talmy, 2000). To borrow an example from Croft (2012), a visitor to New England can describe stunning autumn leaves or foliage. Though both words indicate a multiplex perception, they exhibit a grammatical difference: the (plural) count noun leaves suggests articulated boundaries of multiple individuals, whereas the mass noun foliage suggests a more impressionistic, homogeneous rendering. This dimension includes many distinctions and phenomena related to aspect (Vendler, 1967; Comrie, 1976), including whether an event is seen as discrete (sneeze) or continuous (read); involves a change of state (leave vs. have); has a defined endpoint (read vs.
read a book); etc. Lexical and grammatical markers of configuration properties interact in complex ways; see the discussion of count/mass and aspectual coercion in §4.

Psycholinguistic evidence. Differences in grammatical aspect can modulate how events are conceptualized (Matlock, 2011). Stories written in imperfective aspect are remembered better; participants are also more likely to believe that the events in these stories are still happening (Magliano and Schleich, 2000) and build richer mental simulations of these events (Bergen and Wheeler, 2010). In turn, these differences in conceptualization have downstream consequences, ranging from judgments about an event's complexity (Wampler and Wittenberg, 2019) to predictions about the consequences of a political candidate's behavior on reelection (Fausey and Matlock, 2011). The mass/count distinction has attested psychological implications, including differences in word recognition time (Gillon et al., 1999) (see Fieder et al. (2014) for a review).

Relevant NLP research. Configurational properties are closely linked to well-studied challenges at the syntax-semantics interface, in particular nominal and aspectual coercion effects (§4). Several approaches explicitly model coercion operations based on event structure representations (Moens and Steedman, 1988; Passonneau, 1988; Pulman, 1997; Chang et al., 1998), while others explore statistical learning of aspectual classes and features (Siegel and McKeown, 2000; Mathew and Katz, 2009; Friedrich and Palmer, 2014). Lexical resources have also been developed for aspectual annotation (Donatelli et al., 2018) and the count/mass distinction (Schiehlen and Spranger, 2006; Kiss et al., 2017).

The dimension of METAPHOR is broadly concerned with cross-domain comparison, in which speakers conceptualize two distinct structures in relation to one another (Langacker, 1993, p. 450). Metaphors have been analyzed as structured mappings that allow a target domain to be conceptualized in terms of a source domain (Lakoff and Johnson, 1980). Metaphors pervade language use, and exhibit highly systematic, extensible structure. For example, in English, events are often construed either as locations in space or as objects moving through space. Our experience of time is thus often described in terms of either motion toward future events (we're approaching the end of the year), or the future moving toward us (the deadline is barreling towards us) (Boroditsky, 2000, 2001; Hendricks and Boroditsky, 2017; Núñez and Sweetser, 2006). Metaphor plays a role in our linguistic characterization of many other domains as well (Lakoff and Johnson, 1980).

Psycholinguistic evidence. Different metaphors can shape a comprehender's representation of the same event or concept in radically different ways. Thibodeau and Boroditsky (2011) found that describing a city's crime problem as a beast or as a virus elicited markedly different suggestions about how best to address the problem, e.g., whether participants tended to endorse enforcement- or reform-based solutions. Similar effects of metaphor on event conceptualization have been found across other domains, such as cancer (Hauser and Schwarz, 2015; Hendricks et al., 2018) and climate change (Flusberg et al., 2017) (see Thibodeau et al.
(2017) for a thorough review).

Relevant NLP research. Considerable NLP work has addressed the challenge of metaphor detection and understanding (Narayanan, 1999; Shutova et al., 2010, 2013; Shutova, 2015). This work has made use of both statistical, bottom-up approaches to language modeling (Gutiérrez et al., 2016; Shutova et al., 2013) and knowledge bases such as MetaNet (Dodge et al., 2015; Stickles et al., 2014; David and Dancygier, 2017).

The selective review of construal dimensions presented here is intended to be illustrative, not exhaustive or definitive. Returning to the visual analogy, we can see these dimensions as primarily concerned with how (and what part of) a conceptual scene is perceived (PERSPECTIVE, PROMINENCE); the choice or categorization of which schematic structures are present (CONFIGURATION and METAPHOR); or both (RESOLUTION). We have omitted another high-level categorization dimension, SCHEMATIZATION, which includes concepts related to force dynamics, image schemas, and other experientially grounded schemas well discussed in the literature (Talmy, 2000). We have also not addressed pragmatic inference related to politeness (Brown and Levinson, 1987), indirect requests (Clark, 1979), and other aspects of communicative intent. Additionally, some phenomena are challenging to categorize within the dimensions listed here; a more complete analysis would include evidentiality (Chafe and Nichols, 1986), modality (Mortelmans, 2007), light verb constructions (Wittenberg and Levy, 2017; Wittenberg et al., 2014), and more. Nonetheless, we hope this partial taxonomy provides a helpful entry point to relevant prior work and a starting point for further alignment.

How might construal work in practice? We have emphasized so far the flexibility afforded by the dimensions in §3. But we must also explain why some words and concepts make easier bedfellows than others. This section presents a thumbnail sketch of how the construal process copes with apparent mismatches, where it is the collective constraints of the input structures that guide the search for coherence. We focus on comprehension (similar processes apply in production), and assume some mechanism for proposing interpretations consisting of a set of conceptual structures and associated compatibility constraints. Compatibility constraints are analogous to various kinds of binding constraints proposed in the literature (variable binding, role-filler bindings, unification bindings, and the like): they are indicators that two structures should be conceptualized as a single unit. But compatibility is softer and more permissive than identity or type-compatibility, in that it can also be satisfied with the help of construal operations. Some operations effect relatively subtle shifts in meaning; others have more dramatic effects, including changes to truth-conditional aspects of meaning. Below we illustrate how some example linguistic phenomena fit into the sketch just presented and mention connections to prior lines of work.

Count/mass coercion. English nouns are flexible in their count/mass status (see §3.4). Atypical marking for number or definiteness can cause a shift, or coercion, in boundedness: plural or indefinite marking on mass nouns (a lemonade, two lemonades) yields a bounded interpretation (cups or bottles of lemonade). Conversely, count nouns with no determiner are coerced to an undifferentiated mass, via a phenomenon known as grinding
(there was mosquito all over the windshield) (Pelletier and Schubert, 1989, 2003; Copestake and Briscoe, 1995). Here we see evidence of the outsize influence of tiny grammatical markers in manipulating lexical defaults in the construal process.

Aspectual composition. Aspect is a prime arena for studying how multiple factors conspire to shape event construal. Verbs are associated with default aspectual classes that can be coerced under pressure from conflicting cues, where details of event structure systematically constrain possible coercions and their inferential consequences (Moens and Steedman, 1988; Talmy, 1988). In fact, aspectual coercion can be reanalyzed in terms of construal dimensions. For example, durative modifiers (e.g. for an hour) prefer to combine with atelic processes (lacking a defined endpoint, as in 3a) on which to impose a bound (analogous to count/mass coercion) and duration. Combination with any other aspectual class triggers different operations to satisfy that preference:

(3) a. He {slept / ran} for an hour.
    b. He sneezed for an hour.
    c. He read the book for an hour.
    d. He left for an hour.

A single sneeze, being a discrete event unlikely to last an hour, undergoes ITERATION into a series of sneezes (3b), illustrating a change in plexity (§3.4); the book-reading in (3c) is simply viewed as unfinished (cf. He read the book). The departure in (3d) is a discrete event, but unlike sneezing, it also results in a state change that is reversible and therefore boundable (cf. the iterative reading of He broke the glass for an hour, the non-permanent reading of 2c). Its coercion thus features multiple operations: a PROMINENCE shift to profile the result state of being gone, and then a BOUNDING that also reverses the state, implying a return (Chang et al., 1998).
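To make the combinatorics concrete, here is a toy Python rendering (ours, purely illustrative; no such rule table appears in the literature cited) of how a durative modifier's preference for atelic processes might trigger the coercion operations in (3):

    # Toy sketch: "for an hour" wants an atelic process; other aspectual
    # classes are coerced via construal operations, per the analysis above.
    DURATIVE_COERCIONS = {
        "process":           [],                         # (3a) sleep/run: combines directly
        "point_event":       ["ITERATE"],                # (3b) sneeze -> series of sneezes
        "accomplishment":    ["VIEW_AS_UNFINISHED"],     # (3c) read the book
        "reversible_change": ["PROFILE_RESULT_STATE",    # (3d) leave: profile being gone,
                              "BOUND_AND_REVERSE"],      #      bound it, implying a return
    }

    def construe_with_durative(aspect_class: str) -> list:
        ops = DURATIVE_COERCIONS.get(aspect_class)
        if ops is None:
            raise ValueError(f"no felicitous construal for {aspect_class!r}")
        return ops

The dictionary keys are our own shorthand for the aspectual classes discussed above; a real system would of course need to infer the class from lexical and grammatical cues rather than take it as given.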
Constructional coercion. The flagship example cited in the construction grammar literature (4a) has also been analyzed as a kind of coercion, serving to resolve conflicts between lexical and grammatical meaning (Goldberg, 1995, 2019):

(4) a. He sneezed the napkin off the table.

Here, the verb sneeze, though not typically transitive or causal, appears in a Caused Motion argument structure construction, which pairs oblique-transitive syntax with a caused motion scene. The resulting conflict between its conventional meaning and its putative causal role is resolvable, however, by a commonsense inference that sneezing expels air, which can plausibly cause the napkin's motion (cf. Forbes and Choi, 2017). This coercion, also described as role fusion, differs from the previous examples in manipulating the PROMINENCE of a latent component of meaning. Coercion doesn't always succeed, however: presumably sneezing could only move a boulder with contextual support, and sleeping has a less plausibly forceful reading. In fact, construal depends on the interaction of many factors, including degree of conventionality (where push and blow are prototypical caused motion verbs), embodied and world knowledge (the relative forces of sneezing and sleeping versus napkin weight), and context.5 There is extensive psycholinguistic evidence of constructional coercion and the many factors influencing ease of construal (see Goldberg (2003, 2019) for reviews). Some of these phenomena have been analyzed within computational implementations of construction grammar (Bergen and Chang, 2005; Bryant, 2008; Bergen and Chang, 2013; Dodge and Petruck, 2014; Steels, 2017; Steels and Feldman, 2017; Matos et al., 2017), and have also been incorporated in corpus annotation schemes (Bonial et al., 2011; Hwang et al., 2014; Lyngfelt et al., 2018).

5 A related theory is Dowty's (1991) semantic proto-roles account, which links the grammatical subject/object asymmetry to two clusters of semantic features that are more agent-like (e.g., animacy) or patient-like (e.g., affectedness), respectively; associations between these proto-roles and grammatical subjects and objects are attested in comprehension (Kako, 2006; Pyykkönen et al., 2010) and have been investigated computationally (Reisinger et al., 2015; Rudinger et al., 2018).

that trigger construal operations. A possible analysis of tiny iron grip from (1) illustrates both. First, the modifiers tiny and iron expect a physical entity, but grip is a (nominalized) action. This conflict triggers a profile shift (PROMINENCE) to the grip's effector (a hand), effectively licensing a metonymy. A further conflict arises between the hand and its description as iron (unlikely to be literal unless the protagonist is of robotic lineage). A structural alignment (METAPHOR) then maps the iron's strength to the grip's force, which in turn maps to the degree of dictatorial control.6 We observe that multiple construal operations can occur in sequence; that a conceptual or linguistic element may afford more than one construal within the same analysis (grip as both a hand and metaphorical control); and that aspects of common sense, world knowledge, and culture (though not the focus of the present work) inevitably constrain construal options.

6 Alternatively, iron grip could be treated as an entrenched idiom with a readily accessible construal that tiny can modify.

We turn to a few illustrations of how the pervasive effects of construal can arise in applied settings. Even simple tasks like rescheduling a meeting pose many challenges to dialogue systems, in both understanding users' intents and formulating natural responses. Consider the following exchange:

U-1: When is my 1-1 with Chuck?
A-2: 4 PM today, in 15 minutes.
U-3: Is there another slot soon?
A-4: Not today, should I check tomorrow?
U-5: Let's push it to his tomorrow evening.
A-6: Rescheduled 1-1 with Chuck for 2 PM tomorrow, 6 PM in Brazil.

The agent's first response (A-2) demonstrates sensitivity to PERSPECTIVE by providing a relative time. Interpreting another slot soon in the user's follow-up (U-3) requires both understanding that another is implicitly defined in contrast to the existing slot (relying on PROMINENCE) and then inferring the appropriate RESOLUTION meant
by soon (on the scale of hours, rather than minutes or seconds).", "The agent's succinct response in (A-4) exploits PROMINENCE yet again, both by eliding reference to the sought-after open meeting slot with Chuck, and by using tomorrow (the direct object of check) as a metonymic shorthand for the joint constraints of the user's and Chuck's calendars.", "(Footnote 6: Alternatively, iron grip could be treated as an entrenched idiom with a readily accessible construal that tiny can modify.)", "The next user turn (U-5) employs METAPHOR in its construal of an event as a physical object, capable of being pushed.", "The metaphorical destination (his tomorrow evening) requires consideration of differing time zones (PERSPECTIVE), as made explicit in the final agent turn (A-6).", "Interactions between situational context and the kinds of compatibility constraints discussed in §4 can also affect a dialogue system's best response.", "A user asking a fitness tracking app How long have I been running? while panting around a track may be referring to the current run, but the same question asked while sitting at home is more likely wondering how long they've been habitually running.", "A successful response requires the integration of the constraints from (at least): the verb running, whose progressive marking is associated with ongoing processes but is ambiguous between a single run and a series of runs (CONFIGURATION); the present-perfect have been V-ing, which implies an internal view (PERSPECTIVE); and the situational context (is the user currently running?).", "Situated interactions between humans and robots require the integration of language with other modalities (e.g., visual or haptic).", "Clearly, any spatially grounded referring expressions must be tailored to the interlocutors' PERSPECTIVE (whether shared or not) (Kunze et al., 2017).", "Focus of attention (PROMINENCE) is especially important for systems that must interpret procedural language.", "Recipes, for example, are notoriously telegraphic, with rampant omissions of information that a human cook could easily infer in context (Ruppenhofer and Michaelis, 2010; Malmaud et al., 2014).", "Consider (5): (5) In a medium bowl, cream together the sugar and butter.", "The italicized words provide crucial constraints that would help a cook (human or robot) track the evolving spatial relations.", "(Footnote 7: Indeed, the needs of human-robot interaction have motivated extensions to Abstract Meaning Representation (Banarescu et al., 2013) beyond predicate-argument structure and entities to capture tense and aspect, spatial information, and speech acts (Bonial et al., 2019).)", "The first in establishes the bowl as the reference point for the creaming action, whose result (the mixture of sugar and butter together) becomes the implicit landmark for the subsequent beating in of eggs and vanilla.", "Systems following instructions also require a means of segmenting continuous sensorimotor data and linking it to discrete linguistic categories (Regneri et al., 2013; Yagcioglu et al., 2018) (cf.", "the symbol grounding problem; Harnad, 1990).", "This mapping may depend on flexibly adjusting RESOLUTION and CONFIGURATION based on linguistic cues (e.g., cut/dice/slice/sliver the apple).", "Despite many advances, paraphrase generation systems remain far from human performance.", "One vexing issue is the lack of evaluation metrics that correlate with human judgments for tasks like paraphrase, image captioning, and textual entailment (see, e.g., Bhagat and Hovy, 2013; Pavlick
and Kwiatkowski, 2019; Wang et al., 2019b).", "In particular, it is unclear how closely a good paraphrase should hew to all aspects of the source sentence.", "For example, should active/passive descriptions of the same scene, or the sets of sentences in (2), be considered meaning-equivalent?", "Or take the putative paraphrase below: (6)", "These could plausibly describe the same scene; should their differences across multiple dimensions (PERSPECTIVE, PROMINENCE, RESOLUTION) be rewarded or penalized for this diversity?", "A first step out of this quandary is to recognize construal dimensions and operations as a source of linguistic variability.", "Paraphrase generation and other semantically oriented tasks could incorporate these into system design and evaluation in task-specific ways.", "Throughout this paper, we have emphasized the flexible and multivalent nature of linguistic meaning, as evidenced by the construal phenomena described here.", "The effects of construal are ubiquitous: from conventional to creative language use, through morphemes and metaphors.", "Indeed, even the smallest forms can, like tiny tyrants, exert a transformative force on their surroundings, inducing anything from a subtle shift in emphasis to a radical reconceptualization.", "As illustrated in §5, this flexibility of language use poses a challenge for NLP practitioners.", "Yet crucially (and fortunately), construal is not random: variations in linguistic form correspond systematically to differences in construal.", "The dimensions of construal and their associated operations (§3 and §4) offer principled constraints that render the search for coherence more tractable.", "How, then, should we proceed?", "Our goal is for construal dimensions such as those highlighted in §3 to be incorporated into any research program aspiring to human-level linguistic behavior.", "Below, we describe several concrete recommendations for how to do this.", "More meaningful metrics.", "Taking construal seriously means rethinking how NLP tasks are designed and evaluated.", "Construal dimensions can provide a rubric for assessing tasks, datasets, and meaning representations (Abend and Rappoport, 2017) for which meaningful distinctions they make or require.", "(E.g.: Does it capture the level of RESOLUTION at which entities and events are described? Does it represent METAPHOR?
Is it sensitive to the PROMINENCE of different event participants?)", "Such questions might also help guard against unintended biases like those recently found in NLP evaluations and systems (e.g., Caliskan et al., 2017; Gururangan et al., 2018).", "Popular NLU benchmarks (like SuperGLUE; Wang et al., 2019a) should be critically examined for potential construal biases, and contrasts should be introduced deliberately to probe whether systems are modeling lexical choices, grammatical choices, and meaning in the desired way (Naik et al., 2018; Kaushik et al., 2020; McCoy et al., 2019; Gardner et al., 2020).", "As a broader suggestion, datasets should move away from a one-size-fits-all attitude based on gold annotations.", "Ideally, evaluation metrics should take into account not only partial structure matches, but also similarity to alternate construals.", "Cognitive connections.", "The many connections between construal and the rest of cognition highlight the need for further interdisciplinary engagements in the study of construal.", "The psycholinguistics literature is a particularly rich source of construal-related data and human language benchmarks.", "Psycholinguistic data could also be used to probe neural language models (Futrell et al., 2018; Linzen and Leonard, 2018; van Schijndel and Linzen, 2018; Ettinger, 2020).", "How well do such models capture the phenomena reviewed in §3, and where do they fall short?", "A fuller account of the constellation of factors involved in construal should also take seriously the grounded, situated nature of language use (Harnad, 1990; Kiros et al., 2018; Bender and Koller, 2020; Bisk et al., 2020).", "Frameworks motivated by the linguistic insights mentioned in §2 (such as the work on computational construction grammar referenced in §4) and by growing evidence of embodied simulations as the basis for meaning (Narayanan, 1999; Bergen and Chang, 2005; Feldman, 2006; Bergen, 2012; Tamari et al., 2020) are especially relevant lines of inquiry.", "Much work remains to flesh out the construal dimensions, operations, and phenomena preliminarily identified in §3 and §4, especially in connecting to typological, sociolinguistic, developmental, and neural constraints on conceptualization.", "We believe a concerted effort across the language sciences would provide valuable guidance for developing better NL systems and resources.", "As the saying goes, the camera doesn't lie, but it may tell us only a version of the truth.", "The same goes for language.", "Some of the phenomena we have described may seem, at first glance, either too subtle to bother with or too daunting to tackle.", "But we believe it is both timely and necessary, as language technologies grow in scope and prominence, to seek a more robust treatment of meaning.", "We hope that a deeper appreciation of the role of construal in language use will spur progress toward systems that more closely approximate human linguistic intelligence.", "We are grateful to Lucia Donatelli, Nick Hay, Aurelie Herbelot, Jena Hwang, Jakob Prange, Susanne Riehemann, Hannah Rohde, Rachel Rudinger, and anonymous reviewers for many helpful suggestions; and to the ACL 2020 organizers for planning a special theme, Taking Stock of Where We've Been and Where We're Going.", "Special thanks to Nora Chang-Hay for finally relaxing her tiny iron grip.", "This research was supported in part by NSF award IIS-1812778.", "The FrameNet Brasil Lab is funded by CAPES grants 88887.125411/2016-00 and 88887.144043/2017-00." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "abstain", "objective", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "other", "other", "other", "other" ]
[ "This paper presents a novel method for nested named entity recognition.", "As a layered method, our method extends the prior second-best path recognition method by explicitly excluding the influence of the best path.", "Our method maintains a set of hidden states at each time step and selectively leverages them to build a different potential function for recognition at each level.", "In addition, we demonstrate that recognizing innermost entities first results in better performance than the conventional outermost entities first scheme.", "We provide extensive experimental results on ACE2004, ACE2005, and GENIA datasets to show the effectiveness and efficiency of our proposed method.", "Named entity recognition (NER), as a key technique in natural language processing, aims at detecting entities and assigning semantic category labels to them.", "Early research (Huang et al., 2015; Ma and Hovy, 2016; Lample et al., 2016) proposed to employ deep learning methods and obtained significant performance improvements.", "However, most of them assume that the entities are not nested within other entities, so-called flat NER.", "Inherently, these methods do not work satisfactorily when nested entities exist.", "Figure 1 displays an example of the nested NER task.", "Recently, a large number of papers proposed novel methods (Fisher and Vlachos, 2019; Wang et al., 2020) for the nested NER task.", "Among them, layered methods solve this task through multi-level sequential labeling, in which entities are divided into several levels, where the term level indicates the depth of entity nesting, and sequential labeling is performed repeatedly.", "As a special case of layered method, Shibuya and Hovy (2020) force the This work was done when the first author was at NAIST.", "Former Hogwarts headmaster Dumbledore Albus ROLE ROLE ORG ROLE PER PER Figure 1: An example of nested NER.", "next level entities to locate on the second-best path of the current level search space.", "Hence, their algorithm can repeatedly detect inner entities through applying a conventional conditional random field (CRF) (Lafferty et al., 2001) and then exclude the obtained best paths from the search space.", "To accelerate computation, they also designed an algorithm to efficiently compute the partition function with the best path excluded.", "Moreover, because they search the outermost entities first, performing the second-best path search only on the spans of extracted entities is sufficient, since inner entities can only exist within outer entities.", "However, we claim that the target path at the next level is neither necessary nor likely to be the second-best path at the current level.", "Instead, those paths sharing many overlapping labels with the current best path are likely to be the second-best path.", "Besides, Shibuya and Hovy (2020) reuse the same potential function at all higher levels.", "Thus, even though they exclude the best path, the influence of the best path is still preserved, since the emission scores of labels on the best path are used in the next level recognition.", "Moreover, these best path labels are treated as the target labels at the current level.", "However, if they are not on the best path of the next level, they will be treated as non-target labels at the next level, hence these adversarial optimization goals eventually hurt performance.", "In this paper, we use a different potential function at each level to solve this issue.", "We propose to achieve this by introducing an encoder that produces a 
set of hidden states at each time step.", "At each level, we select some hidden states for entity recognition and then remove the hidden states that interact with the best-path labels before moving to the next level.", "In this way, the emission scores of these best-path labels are completely different, so we can explicitly exclude the influence of the best path.", "Furthermore, we also propose three different selection strategies for fully leveraging information among hidden states.", "Besides, Shibuya and Hovy (2020) proposed to recognize entities from outermost to innermost.", "We empirically demonstrate that extracting the innermost entities first results in better performance.", "This may be due to the fact that some long entities do not contain any inner entity, so outermost-first encoding mixes these entities with other short entities at the same levels, thereby causing the encoder representations to be dislocated.", "In this paper, we convert entities to the IOBES encoding scheme (Ramshaw and Marcus, 1995) and solve nested NER by applying a CRF level by level.", "Our contributions are fourfold:", "(a) we design a novel nested NER algorithm to explicitly exclude the influence of the best path by using a different potential function at each level,", "(b) we propose three different selection strategies for fully utilizing information among hidden states,", "(c) we empirically demonstrate that recognizing entities from innermost to outermost results in better performance,", "(d) and we provide extensive experimental results to demonstrate the effectiveness and efficiency of our proposed method on the ACE2004, ACE2005, and GENIA datasets.", "The named entity recognition task aims to recognize entities in a given sequence $\{x_t\}_{t=1}^{n}$.", "For nested NER, some shorter entities may be nested within longer entities, while for flat NER there is no such case.", "Existing algorithms solve flat NER by applying a sequential labeling method, which assigns each token a label $y_t \in \mathcal{Y}$ to determine the span and category of each entity and non-entity simultaneously.", "To solve nested NER, we follow the previous layered method and extend this sequential labeling method with a multi-level encoding scheme.", "In this encoding scheme, entities are divided into several levels according to their depths, and we apply the sequential labeling method level by level to recognize all entities.", "Shibuya and Hovy (2020) proposed to recognize the outermost entities first and recursively detect the nested inner entities.", "However, we find that detecting the innermost entities first results in better performance.", "We take the sentence in Figure 1 as an example to illustrate the details of these two encoding schemes (see also the sketch below).", "The results of the outermost-first encoding scheme look as follows.", "Labels B-, I-, and E- indicate that the current word is the beginning, the middle, and the end of an entity, respectively.", "Label S- means a single-word entity, and label O stands for a non-entity word.", "For example, the outermost entity Former Hogwarts headmaster Albus Dumbledore appears at the first level, while the innermost entities Hogwarts and headmaster appear at the fourth level.", "Since there exists no deeper nested entity, the remaining levels contain only label O.", "In contrast, the innermost-first encoding scheme converts the same example to the following label sequences.", "In this encoding scheme, the innermost entities Hogwarts, headmaster, and Albus Dumbledore appear at the first level.",
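To make the innermost-first scheme concrete, here is the minimal sketch referenced above. The span inventory for the Figure 1 sentence is inferred from the running example in the text, and the greedy lowest-free-level assignment is an illustrative reading of "levels by nesting depth," not the authors' released code.

```python
# Hedged sketch of innermost-first multi-level IOBES encoding. Span format
# (start, end_inclusive, type) and the greedy level assignment are assumptions.
def encode_innermost_first(n_tokens, spans, n_levels):
    levels = [[] for _ in range(n_levels)]
    # Shorter (inner) spans first; each span takes the lowest level where it
    # overlaps nothing already placed there.
    for s, e, typ in sorted(spans, key=lambda x: x[1] - x[0]):
        for lv in levels:
            if all(e < s2 or e2 < s for s2, e2, _ in lv):
                lv.append((s, e, typ))
                break
    label_seqs = []
    for lv in levels:
        seq = ["O"] * n_tokens
        for s, e, typ in lv:
            if s == e:
                seq[s] = f"S-{typ}"
            else:
                seq[s], seq[e] = f"B-{typ}", f"E-{typ}"
                for i in range(s + 1, e):
                    seq[i] = f"I-{typ}"
        label_seqs.append(seq)
    return label_seqs

# Tokens: Former(0) Hogwarts(1) headmaster(2) Albus(3) Dumbledore(4)
spans = [(1, 1, "ORG"), (2, 2, "ROLE"), (3, 4, "PER"),
         (1, 2, "ROLE"), (0, 2, "ROLE"), (0, 4, "PER")]
for level, seq in enumerate(encode_innermost_first(5, spans, 4), start=1):
    print(level, seq)
# Level 3 yields B-ROLE, I-ROLE, E-ROLE, O, O, matching the example in the text.
```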
"Note that the innermost-first encoding scheme is not the simple reverse of the outermost-first encoding scheme.", "For example, the entity Former Hogwarts headmaster and the entity Albus Dumbledore appear at the same level in the outermost-first scheme, but they appear at different levels in the innermost-first scheme.", "Although the second-best path searching algorithm is proposed as the main contribution of Shibuya and Hovy (2020), we claim that forcing the target path at the next level to be the second-best path at the current level is not optimal.", "As in the innermost-first encoding example above, the best path at level 3 is B-ROLE, I-ROLE, E-ROLE, O, O.", "Therefore the second-best path is more likely to be one of the paths that share as many labels as possible with the best path, e.g., B-ROLE, I-ROLE, E-ROLE, O, S-ORG, rather than the actual target label sequence at level 4, i.e., B-PER, I-PER, I-PER, I-PER, E-PER, which does not overlap with the best path at all.", "In addition, Shibuya and Hovy (2020) reuse the same potential function at all higher levels.", "This indicates that, for instance, at level 3 and time step 1, their model encourages the dot product of the hidden state and the label embedding, $h_1^{\top} v_{\mathrm{B\text{-}ROLE}}$, to be larger than $h_1^{\top} v_{\mathrm{B\text{-}PER}}$, while at level 4, the remaining influence of the best path reversely forces $h_1^{\top} v_{\mathrm{B\text{-}PER}}$ to be larger than $h_1^{\top} v_{\mathrm{B\text{-}ROLE}}$.", "These adversarial optimization goals conflict with each other and eventually result in sub-optimal performance.", "Therefore, the crux of the matter is to introduce different emission scores for different levels.", "For example, encouraging $h_1^{3\top} v_{\mathrm{B\text{-}ROLE}} > h_1^{3\top} v_{\mathrm{B\text{-}PER}}$ at level 3 and encouraging $h_1^{4\top} v_{\mathrm{B\text{-}PER}} > h_1^{4\top} v_{\mathrm{B\text{-}ROLE}}$ at level 4 will not lead to adversarial optimization directions anymore, where $h_1^3$ and $h_1^4$ are two distinct hidden states used at levels 3 and 4, respectively.", "To achieve this goal, we introduce a novel encoder which outputs $m$ hidden states $\{h_t^l\}_{l=1}^{m}$, where $m$ is the number of levels, as an alternative to the conventional encoder, which can only output a single hidden state $h_t \in \mathbb{R}^{d_h}$ at each time step.", "To make a distinction between our $m$ hidden states and the conventional single hidden state, we use the term chunk from now on to refer to these hidden states $h_t^l \in \mathbb{R}^{d_h/m}$.", "We restrict the chunk dimension to be $d_h/m$, so the total number of parameters remains unchanged.", "As we mentioned above, our algorithm maintains a chunk set for each time step and excludes the influence of the best path through selecting and removing chunks.", "Naturally, how to select chunks becomes the next detail to be finalized.", "For clarity, we use the notation $\mathcal{H}_t^l$ to denote the chunk set at level $l$ and time step $t$, and use $\mathcal{H}^l$ to refer to all of these chunk sets at level $l$ across time steps, i.e., $\{\mathcal{H}_t^l\}_{t=1}^{n}$.", "Because we remove one and only one chunk at each time step per level, $|\mathcal{H}_t^l| + l = m + 1$ always holds.", "An intuitive idea is to follow the original chunk order and simply select the $l$-th chunk for level $l$.", "At level $l$, no matter which label is scored, the emission score is calculated using $h_t^l$.", "In this way, this naive potential function can be defined as follows: $\phi(y_{t-1}^l, y_t^l, \mathcal{H}_t^l) = A_{y_{t-1}^l, y_t^l} + h_t^{l\top} v_{y_t^l}$ (1), where $A \in \mathbb{R}^{|\mathcal{Y}| \times |\mathcal{Y}|}$ is the transition matrix, $\mathcal{Y}$ is the label set, $A_{y_{t-1}^l, y_t^l}$ indicates the transition score from label $y_{t-1}^l$ to label $y_t^l$, and $v_{y_t^l} \in \mathbb{R}^{d_h/m}$ is the embedding of label $y_t^l$.",
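As a concrete rendering of Equation 1, the sketch below scores one level-l label path under the naive potential: transition scores from A plus emission scores computed with the l-th chunk. Tensor names, shapes, and the 0-indexing of levels are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of Eq. (1): path score under the naive potential, which always
# uses the l-th chunk for level l. chunks: (n, m, d) with d = d_h / m;
# A: (|Y|, |Y|) transition matrix; V: (|Y|, d) label embeddings.
def path_score_naive(chunks, labels, A, V, level):
    # labels: list of n label ids for this level; `level` is 0-indexed here
    score = chunks[0, level] @ V[labels[0]]              # emission at t = 0
    for t in range(1, len(labels)):
        score = score + A[labels[t - 1], labels[t]]      # transition term
        score = score + chunks[t, level] @ V[labels[t]]  # emission: h_t^l . v_y
    return score
```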
"In this case, the $l$-th chunk $h_t^l \in \mathcal{H}_t^l$ is exactly the chunk that interacts with the target label, and thus it should be removed from $\mathcal{H}_t^l$.", "One concern with the naive potential function is that it implicitly assumes the outputs of the encoder are automatically arranged in level order rather than in some other syntactic or semantic order; e.g., the encoder may encode all LOC-related information in the first $d_h/m$ dimensions while leaving ORG-related information to the final $d_h/m$ dimensions.", "For instance, at level 3 and time step 1, the naive potential function forces $h_1^{3\top} v_{\mathrm{B\text{-}ROLE}} > h_1^{3\top} v_{\mathrm{B\text{-}PER}}$.", "But if there exists another chunk, say $h_1^5$, which is more similar to $v_{\mathrm{B\text{-}PER}}$, then directly selecting $h_1^5$ and forcing $h_1^{3\top} v_{\mathrm{B\text{-}ROLE}} > h_1^{5\top} v_{\mathrm{B\text{-}PER}}$ is more reasonable, because it makes training harder than the former, given that $h_1^{5\top} v_{\mathrm{B\text{-}PER}} > h_1^{3\top} v_{\mathrm{B\text{-}PER}}$.", "In other words, this selection strategy leads to $h_t^{\ell_1\top} v_{y_t^1} > h_t^{\ell_2\top} v_{y_t^2} > \dots > h_t^{\ell_m\top} v_{y_t^m}$, where $\ell_l$ is the index of the chunk selected at level $l$; for the naive potential function, this chain of inequalities does not always hold.", "From this aspect, our method can also be considered as selecting the best path in the second-best search space.", "Therefore, instead of following the original chunk order, we propose to let each label select the chunk most similar to it to obtain an emission score.", "We denote this definition as the max potential function: $\phi(y_{t-1}^l, y_t^l, \mathcal{H}_t^l) = A_{y_{t-1}^l, y_t^l} + \max_{h \in \mathcal{H}_t^l} h^{\top} v_{y_t^l}$ (3).", "In this case, we update the chunk sets by removing the chunks that are selected by the target labels.", "Furthermore, since the log-sum-exp operation is a well-known differentiable approximation of the max operation, we also introduce it as the third potential function: $\phi(y_{t-1}^l, y_t^l, \mathcal{H}_t^l) = A_{y_{t-1}^l, y_t^l} + \log \sum_{h \in \mathcal{H}_t^l} \exp\big(h^{\top} v_{y_t^l}\big)$ (5).", "The chunk set is updated in the same way as in Equation 4.", "We refer to this potential function definition as logsumexp in the rest of this paper.", "Following previous work (Shibuya and Hovy, 2020), we convert words to word embeddings $w_t \in \mathbb{R}^{d_w}$ and employ a character-level bidirectional LSTM to obtain character-based word embeddings $c_t \in \mathbb{R}^{d_c}$.", "The concatenation of the two is fed into the encoding layer as the token representation $x_t = [w_t, c_t] \in \mathbb{R}^{d_x}$.", "We employ a three-layered bidirectional LSTM to encode sentences and leverage contextual information: $\{h_t\}_{t=1}^{n} = \mathrm{BiLSTM}(\{x_t\}_{t=1}^{n})$ (6), where $h_t \in \mathbb{R}^{d_h}$ is the hidden state.",
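Before moving to the full encoder, the following hedged PyTorch sketch contrasts the max and logsumexp emission terms of Equations 3 and 5 over a chunk set, together with the chunk removal that shrinks the set between levels; the boolean-mask bookkeeping is an illustrative implementation choice, not the released code.

```python
# Hedged sketch of the max (Eq. 3) and logsumexp (Eq. 5) emission terms over a
# chunk set, with removal of the selected chunk. `mask` marks chunks still in
# H_t^l; names and shapes are assumptions, not the actual implementation.
import torch

def emission(chunks, mask, v_label, mode="max"):
    # chunks: (m, d); mask: (m,) bool; v_label: (d,)
    scores = (chunks @ v_label).masked_fill(~mask, float("-inf"))
    if mode == "max":
        return scores.max()
    return torch.logsumexp(scores, dim=0)  # differentiable relaxation of max

def remove_selected(chunks, mask, v_label):
    # Drop the chunk the label interacts with most strongly (argmax selection).
    scores = (chunks @ v_label).masked_fill(~mask, float("-inf"))
    new_mask = mask.clone()
    new_mask[scores.argmax()] = False
    return new_mask
```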
"In contrast to the encoders of previous work, which can only output a single hidden state at each time step, we split $h_t$ into $m$ chunks, $[h_t^1, \dots, h_t^m] = h_t$ (7), where $h_t^j \in \mathbb{R}^{d_h/m}$, and use them as the first-level chunk set, i.e., $\mathcal{H}_t^1 = \{h_t^j\}_{j=1}^{m}$, to start recognition.", "At each level, we run a shared conventional CRF with its corresponding potential function $\phi(y_{t-1}^l, y_t^l, \mathcal{H}_t^l)$ and update the chunk sets until finishing all $m$ levels.", "During training, we remove chunks according to the selections of the target labels, while during decoding, removal depends on the selections of the predicted labels.", "Following the definition of a CRF, the conditional probability of a given label sequence at the $l$-th level, i.e., $y^l = \{y_t^l\}_{t=1}^{n}$, can be defined as $p(y^l \mid \mathcal{H}^l) = \frac{1}{Z(\mathcal{H}^l)} \exp \sum_{t=1}^{n} \phi(y_{t-1}^l, y_t^l, \mathcal{H}_t^l)$ (8), with the partition function $Z(\mathcal{H}^l) = \sum_{y' \in \mathcal{Y}^n} \exp \sum_{t=1}^{n} \phi(y'_{t-1}, y'_t, \mathcal{H}_t^l)$ (9).", "The training objective is $\mathcal{L} = \sum_{l=1}^{m} \log p(y^l \mid \mathcal{H}^l)$ (10); at decoding time, we iteratively apply the Viterbi algorithm (Forney, 1973) at each level to search for the most probable label sequences.", "The pseudocode for the training and decoding algorithms with the max or logsumexp potential function can be found in Algorithms 1 and 2, respectively.", "We conduct experiments on three nested named entity recognition datasets in English, i.e., ACE2004 (Doddington et al., 2004), ACE2005 (Walker et al., 2006), and GENIA (Kim et al., 2003).", "We divide all these datasets into train/dev/test splits following Shibuya and Hovy (2020) and Wang et al. (2020).", "The dataset statistics can be found in Table 1.", "For word embedding initialization, we utilize 100-dimensional pre-trained GloVe vectors (Pennington et al., 2014) for the ACE2004 and ACE2005 datasets, and use 200-dimensional biomedical-domain word embeddings (Chiu et al., 2016) for the GENIA dataset.", "Moreover, we randomly initialize 30-dimensional vectors for character embeddings.", "The hidden state dimension of the character-level LSTM, $d_c$, is 100, i.e., 50 in each direction; thus the dimension of the token representation, $d_x$, is 200.", "We apply dropout (Srivastava et al., 2014) to the token representations before feeding them into the encoder.", "The hidden state dimension of the three-layered LSTM is 600 for ACE2004 and ACE2005, i.e., 300 in each direction, and 400 for GENIA.", "We choose different dimensions because the maximal depth of entity nesting, $m$, differs.", "We apply layer normalization (Ba et al., 2016) and dropout with a 0.5 ratio after each bidirectional LSTM layer.", "Different from Shibuya and Hovy (2020), we use only one CRF instead of employing different CRFs for different entity types.", "Besides, our CRF is also shared across levels, which means we learn and decode entities at all levels with the same CRF.", "Our model is optimized using stochastic gradient descent (SGD) with a decaying learning rate $\eta_\tau = \eta_0 / (1 + \rho \tau)$, where $\tau$ is the index of the current epoch.", "For ACE2004, ACE2005, and GENIA, the initial learning rates $\eta_0$ are 0.2, 0.2, and 0.1, and the decay rates $\rho$ are 0.01, 0.02, and 0.02, respectively.", "We set the weight decay rate, the momentum, the batch size, and the number of epochs to $10^{-8}$, 0.5, 32, and 100, respectively; in particular, we use batch size 64 on the GENIA dataset.", "We clip gradients exceeding 5.", "Besides, we also conduct experiments to evaluate the performance of our model with contextual word representations.",
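Tying Equations 8-10 to the level-by-level procedure just described (Algorithms 1 and 2 in the paper), here is a compact skeleton; the CRF interface shown is an assumed placeholder, not the actual API.

```python
# Hedged skeleton of level-by-level training and decoding. `crf` is an assumed
# placeholder object exposing log_likelihood, viterbi_decode, and the chunk-set
# update; it stands in for Algorithms 1-2 rather than reproducing them.
def train_loss(crf, chunk_sets, gold_paths):
    loss = 0.0
    for gold in gold_paths:                                  # levels l = 1 .. m
        loss = loss - crf.log_likelihood(chunk_sets, gold)   # Eqs. 8-10
        chunk_sets = crf.remove_chunks(chunk_sets, gold)     # target labels pick chunks
    return loss

def decode(crf, chunk_sets, m):
    paths = []
    for _ in range(m):
        path = crf.viterbi_decode(chunk_sets)                # best path at this level
        paths.append(path)
        chunk_sets = crf.remove_chunks(chunk_sets, path)     # predicted labels pick chunks
    return paths
```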
"BERT (Devlin et al., 2019) and Flair (Akbik et al., 2018) are the most commonly used contextual word representations in previous work, and they have been shown to substantially improve model performance.", "In these settings, contextual word representations are concatenated with the word and character representations to form the token representations, i.e., $x_t = [w_t, c_t, e_t]$, where $e_t$ is the contextual word representation; it is not fine-tuned in any of our experiments.", "[Table 2 caption: Results with BERT and Flair contextual word representations, respectively. Bold and underlined numbers indicate the best and the second-best results, respectively. naive, max, and logsumexp refer to the three potential function definitions, i.e., Equations 1, 3, and 5, respectively. Numbers in parentheses are standard deviations.]", "BERT is a transformer-based (Vaswani et al., 2017) pre-trained contextual word representation.", "In our experiments, for the ACE2004 and ACE2005 datasets we use the general-domain checkpoint bert-large-uncased, and for the GENIA dataset we use the biomedical-domain checkpoint BioBERT large v1.1 (Lee et al., 2019), available at https://github.com/naver/biobert-pretrained.", "We average all BERT subword embeddings in the last four layers to build 1024-dimensional vectors.", "Flair is a character-level BiLSTM-based pre-trained contextual word representation.", "We concatenate the vectors obtained from the news-forward and news-backward checkpoints for ACE2004 and ACE2005, and use the pubmed-forward and pubmed-backward checkpoints for GENIA, to build 4096-dimensional vectors.", "All experiments are evaluated by precision, recall, and F1.", "All of our experiments were run 4 times with different random seeds, and averaged scores are reported in the following tables.", "Our model is implemented with PyTorch (Paszke et al., 2019); the code is available at https://github.com/speedcell4/nersted, and we run experiments on a GeForce GTX 1080Ti with 11 GB of memory.", "Table 2 shows the performance of previous work and our model on the ACE2004, ACE2005, and GENIA datasets.", "Our model substantially outperforms most of the previous work, especially in comparison with our baseline, Shibuya and Hovy (2020).", "When using only word embeddings and character-based word embeddings, our method exceeds theirs by 2.64 F1 and also achieves results comparable with the recent competitive method of Wang et al. (2020).", "When utilizing BERT, and further employing Flair, our method consistently outperforms Shibuya and Hovy (2020) by 1.09 and 0.60 F1, respectively.", "On the ACE2005 dataset, our method improves the F1 scores by 1.98, 0.72, and 0.59, respectively, compared with Shibuya and Hovy (2020).", "Although our model's performance is generally inferior to that of Wang et al. (2020), our max potential function method is slightly superior to theirs, by 0.05 F1, when employing BERT.", "Furthermore, on the biomedical-domain dataset GENIA, our method consistently outperforms Shibuya and Hovy (2020) by 0.18, 1.62, and 1.57 F1, respectively.", "Although the low scores of Shibuya and Hovy (2020) are due to their use of the general-domain checkpoint bert-large-uncased instead of our biomedical-domain checkpoint, our model is still superior to Straková et al.
(2019), who used the same checkpoint as us, by 0.47 and 0.62 F1, respectively.", "As for the three potential functions, we notice that the max and logsumexp potential functions generally work better than the naive potential function.", "These results demonstrate that the chunk selection strategies of max and logsumexp can leverage information from all remaining chunks and constrain the LSTM hidden states to be more semantically ordered.", "When we use BERT and Flair, the advantage of the max and logsumexp potential functions is less obvious compared with the case where we only use word embeddings and character-based word embeddings, especially on the GENIA dataset.", "We hypothesize that BERT and Flair provide rich contextual information, so that selecting chunks in the original order is sufficient and our dynamic selection mechanism can only slightly improve the model performance.", "We also conduct experiments on the ACE2004 dataset to measure the influence of the outermost-first and innermost-first encoding schemes.", "As shown in Table 3, the innermost-first encoding scheme consistently works better than the outermost-first encoding scheme with all potential functions.", "We hypothesize that outermost entities do not necessarily contain inner entities, especially longer ones, and that putting those diversely nested outermost entities at the same level would dislocate the encoding representation.", "[Table 3: Influence of the two encoding schemes and the three potential functions (P / R / F1, standard deviations in parentheses). Outermost First: naive 79.08 / 76.57 / 77.80 (0.26); max 79.07 / 75.11 / 77.04 (0.20); logsumexp 79.05 / 76.39 / 77.70 (0.32). Innermost First: naive 81.12 / 77.71 / 79.38 (0.31); max 81.90 / 78.05 / 79.92 (0.10); logsumexp 81.24 / 78.96 / 80.08 (0.22).]", "Furthermore, even when we use the outermost-first encoding scheme, our method is superior to Shibuya and Hovy (2020), which further demonstrates the effectiveness of excluding the influence of the best path.", "The time complexity of the encoder is $O(n)$, and because we employ the same tree-reduction acceleration trick as Rush (2020), the time complexity of the CRF is reduced to $O(\log n)$; therefore the overall time complexity is $O(n + m \log n)$.", "Even though our model performs slightly worse than Wang et al.
(2020), the training and inference speed of our model is much faster than theirs, as shown in Table 4, since we do not need to stack the decoding component to 16 layers.", "In particular, when we increase the batch size to 64, the decoding speed is more than two times faster than that of their model.", "We display the per-level performance on the ACE2005 dataset in Table 5.", "The max potential function achieves consistently higher precision scores at the first three levels than the naive and logsumexp potential functions, while at the same time obtaining the lowest recall scores.", "The logsumexp potential function, on the contrary, achieves the highest recall scores but fails to obtain satisfactory precision scores.", "Because most entities are located at the first two levels, max and logsumexp achieve the best overall precision and recall scores, respectively.", "We analyze the chunk distribution on the test split of the ACE2005 dataset by plotting the heat maps", "in Figure 3, in which the numbers indicate the percentage of times each chunk is selected by a particular level or label.", "For example, the 35 in the upper-right corner means that, when using the logsumexp potential function, 35% of predictions at the first level are made by choosing the sixth chunk, while the 78 in the lower-left corner shows that 78% of WEA predictions are related to the first chunk with naive.", "To make it easier to compare with naive, we arranged the chunk orders of max and logsumexp, without loss of generality, to make the level-chunk distribution concentrate mainly on the diagonal.", "At the first level, the logsumexp potential function also prefers to select the sixth and the fourth chunks rather than the first chunk; we hypothesize this is because most of the B- and S- labels are located on the first level, which can be confirmed from the syntactic-chunk heat map of logsumexp, where 78% of B- and 70% of S- labels go to the sixth and fourth chunks.", "Similarly, max also has a high probability of selecting the second chunk.", "Generally, the chunk distribution of logsumexp is smoother than that of max.", "Besides, we find that label O selects chunks almost uniformly, in both the syntactic and semantic heat maps, while the other, meaningful labels have their own distinguished preferences.", "The syntactic labels S- and B- mainly represent the beginning of an entity, while I- and E- stand for the continuation and ending of an entity.", "In the syntactic-chunk heat map of naive, they are indiscriminately distributed to the first chunk, because most of the entities are located on the first level.", "However, max and logsumexp utilize different chunks to represent these different syntactic categories.", "Likewise, the semantic label GPE, when using logsumexp, also has a 61% probability of selecting the sixth chunk rather than concentrating on the first chunk as with naive.", "These observations further demonstrate that our dynamic chunk selection strategies are capable of learning more meaningful representations.", "Existing NER algorithms commonly employ various neural networks to leverage more morphological and contextual information to improve performance.", "For example, to handle the out-of-vocabulary issue by introducing morphological features, Huang et al. (2015) proposed to employ manual spelling features, while Ma and Hovy (2016) and Lample et al. (2016) suggested introducing CNNs and LSTMs to build word representations at the character level.", "Zhang et al. (2018) and Chen et al.
(2019) introduced global representations to enhance the encoder's capability of encoding contextual information.", "Layered Model: As a layered model, Ju et al. (2018) dynamically update span-level representations for next-layer recognition according to recognized inner entities.", "Fisher and Vlachos (2019) proposed a merge-and-label method to further enhance this idea.", "Recently, Shibuya and Hovy (2020) designed a novel algorithm to efficiently learn and decode the second-best path on the spans of detected entities.", "Luo and Zhao (2020) build two different graphs, one over the original token sequence and the other over the tokens in recognized entities, to model the interaction between them.", "Wang et al. (2020) proposed to learn l-gram representations at layer l by applying a decoder component to reduce a sentence layer by layer and to directly classify these l-gram spans.", "Region-based Model: Lin et al. (2019) proposed an anchor-region network to recognize nested entities by first detecting anchor words and entity boundaries and then classifying each detected span.", "Exhaustive models simply enumerate all possible spans and utilize a maximum entropy tagger (Byrne, 2007) or neural networks (Xu et al., 2017; Sohrab and Miwa, 2018; Zheng et al., 2019) for classification.", "Luan et al. (2019) additionally aim to consider the relationships among entities and proposed a novel method to jointly learn both entities and relations.", "Hypergraph-based Model: Lu and Roth (2015) proposed a hypergraph structure, in which edges are connected to multiple nodes to represent nested entities.", "Muis and Lu (2017) and Wang and Lu (2018) resolved the spurious-structure and ambiguity issues of the hypergraph structure.", "And Katiyar and Cardie (2018) proposed another kind of hypergraph structure.", "Parsing-based Model: Finkel and Manning (2009) observed that all nested entities are located at non-terminal nodes of the constituency parses of the original sentences; thus they proposed to use a CRF-based constituency parser to obtain them.", "However, its cubic time complexity limits its applicability.", "Wang et al. (2018) instead proposed to use a transition-based constituency parser to incrementally build a constituency forest; its linear time complexity ensures it can handle longer sentences.", "In this paper, we proposed a simple and effective method for nested named entity recognition that explicitly excludes the influence of the best path by selecting and removing chunks at each level to build different potential functions.", "We also proposed three different selection strategies to leverage information from all remaining chunks.", "Besides, we found that the innermost-first encoding scheme works better than the conventional outermost-first encoding scheme.", "Extensive experimental results demonstrate the effectiveness and efficiency of our method.", "However, one demerit of our method is that the number of chunks, i.e., the maximal depth of entity nesting, must be chosen in advance as a hyper-parameter.", "We will extend it to arbitrary depths as future work.", "This work was partly supported by JST CREST Grant Number JPMJCR1513.", "The authors would like to thank the anonymous reviewers for their instructive comments." ]
[ "objective", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "method", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "other", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "other", "abstain", "result", "result", "result", "result", "result", "other", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "objective", "objective", "method", "objective", "other", "other" ]
[ "In this paper we present the first model for directly synthesizing fluent, natural-sounding spoken audio captions for images that does not require natural language text as an intermediate representation or source of supervision.", "Instead, we connect the image captioning module and the speech synthesis module with a set of discrete, sub-word speech units that are discovered with a self-supervised visual grounding task.", "We conduct experiments on the Flickr8k spoken caption dataset in addition to a novel corpus of spoken audio captions collected for the popular MSCOCO dataset, demonstrating that our generated captions also capture diverse visual semantics of the images they describe.", "We investigate several different intermediate speech representations, and empirically find that the representation must satisfy several important properties to serve as drop-in replacements for text.", "Although there are over 7,000 languages spoken worldwide (Lewis et al., 2016), only several dozen have enough data available to support supervised speech recognition, and many languages do not even employ a writing system (Adda et al., 2016).", "In contrast, most people learn to use spoken language long before they learn to read and write, suggesting that linguistic annotation is not a prerequisite for speech processing systems.", "This line of reasoning motivates research that aims to discover meaningful linguistic abstractions (phones, words, etc.) directly from the speech signal, with the intention that they could reduce the reliance of spoken language systems on text transcripts.", "A rich body of work has recently emerged investigating representation learning for speech using visual grounding objectives (Synnaeve et al., 2014; Harwath and Glass, 2015; Harwath et al., 2016; Kamper et al., 2017; Havard et al., 2019a; Merkx et al., 2019; Chrupaa et al., 2017; Alishahi et al., 2017; Scharenborg et al., 2018; Hsu and Glass, 2018a; Kamper et al., 2018; Surs et al., 2019; Il-harco et al., 2019; Eloff et al., 2019), as well as how word-like and subword-like linguistic units can be made to emerge within these models (Har-wath and Glass, 2017; Harwath et al., 2019; Drexler and Glass, 2017; Alishahi et al., 2017; Harwath et al., 2019; Harwath and Glass, 2019; Havard et al., 2019b; Harwath et al., 2020).", "So far, these efforts have predominantly focused on inference , where the goal is to learn a mapping from speech waveforms to a semantic embedding space.", "Generation of speech conditioned on a point in a semantic space has been less explored, and is what we focus on in this work.", "We hypothesize that generative approaches offer interesting advantages over relying solely on inference.", "For example, prior works have demonstrated the capability of recognizing visually descriptive words, but have not been shown to learn non-visual words or grammar.", "Our experiments show that these aspects of spoken language are learned to some degree by a visually-grounded generative model of speech.", "Specifically, we introduce a model capable of directly generating fluent spoken audio captions of images without the need for natural language text, either as an intermediate representation or a form of supervision during training (Figure 1).", "Tremendous progress has been made recently in natural language image caption generation (Kiros et al., 2014; Mao et al., 2015; Vinyals et al., 2015; Karpathy and Fei-Fei, 2015; Xu et al., 2015; Rennie et al., 2017; Dai and Lin, 2017; Lu et al., 2017; Anderson et al., 2018; Lu et 
al., 2018) and naturalistic text-to-speech synthesis (TTS) (Ping et al., 2017; Taigman et al., 2017; Wang et al., 2017; Shen et al., 2018; Oord et al., 2016).", "[Figure 1: Spoken image captions generated from the proposed model, with diversity in both linguistic content and acoustic properties, controlled through the I2U and the U2S models, respectively. Example captions: a person in a blue jacket is on a snowboard on a snow covered slope; a snowboarder is snowboarding on the side of the mountain (same unit sequence, different speakers; different unit sequences, same speaker).]", "Combining these models provides a means for generating spoken image descriptions, but existing approaches for training these models are reliant on text during training.", "Instead, we leverage sub-word speech units discovered using a self-supervised learning objective as a drop-in replacement for the text.", "We hypothesize that by using such techniques, an even wider variety of traditionally text-based NLP models could be applied to speech data without the need for transcription or automatic speech recognition (ASR) systems.", "Because all human languages utilize small, discrete phonetic inventories (International Phonetic Association, 1999), we posit that our framework should be applicable to any language in the world.", "In our experiments, we demonstrate that not just any set of discovered speech units can function in this role.", "We find the greatest success with units that are discrete, exhibit a low frame rate, and are highly robust to speaker and environmental variability.", "The main contributions of our paper are as follows: 1. The first methodology for fluent image-to-speech synthesis that does not rely on text.", "A critical aspect of our approach is factorizing the model into an Image-to-Unit (I2U) module and a Unit-to-Speech (U2S) module, where the speech units are discovered in a self-supervised fashion.", "This approach enables disentanglement of linguistic variability and acoustic/speaker variability.", "2. Extensive analysis of the properties required for learned units to replace text.", "While the idea may seem simple and straightforward, obtaining proper units is not a trivial task.", "In fact, most of the units experimented with in this paper fail to serve as drop-in replacements.", "Moreover, we demonstrate that what are deemed good units varies significantly between inference and generation.", "3. Demonstrating the insufficiency of beam-search-based evaluation.", "We show that even when an I2U model fails to generate a sensible caption through beam search decoding, it can still produce reasonable captions by sampling from the posterior, hinting that posterior mode-based evaluation can inspect only limited aspects of a model.", "4. Proposing a semantic diversity-aware metric.", "We identify issues with an existing metric (Vijayakumar et al., 2018) and propose M-SPICE for sampling-based evaluation to address the problems.", "5.
Over 600,000 spoken audio captions for the MSCOCO dataset.", "We collect 742 hours of speech from 2,352 people tasked with reading each caption out loud.", "This dataset will be made publicly available to support work at the intersection of speech, language, and vision.", "Image-to-Text and Image-to-Speech Captioning.", "Significant progress towards generating realistic (text) captions that describe the content of visual images was made with the advent of deep neural networks (Vinyals et al., 2015; Karpathy and Fei-Fei, 2015; Xu et al., 2015; Anderson et al., 2018).", "Far less work has focused on generating spoken audio captions from natural images.", "Training an image-to-speech system using separate (image, text) and (text, speech) datasets was explored in (Ma et al., 2019).", "Hasegawa-Johnson et al. (2017) is the only prior work that has explored image-to-speech synthesis without using text, but with limited results.", "In that work, BLEU scores were only computed in terms of unsupervised acoustic units, not an estimate of the actual words produced by the synthesizer, which can be problematic as discussed in Section 4.", "The resulting captions were not evaluated for fluency, naturalness, or intelligibility, and the BLEU scores in terms of the unsupervised units were very low (0.014 on the MSCOCO test set) compared to ours (0.274).", "Wang et al. (2020b) is a concurrent work that proposes a text-free end-to-end image-to-speech model, which simplifies the task by using pairs of images and synthesized speech generated from a single-speaker TTS model to reduce the acoustic variation.", "In contrast, by leveraging robust learned units, our I2U module can be trained on real speech with abundant variation, and the U2S module serves as a vocoder that requires only a small amount of clean speech (transcripts not needed).", "Hence, our system imposes fewer data constraints yet still outperforms Wang et al.
(2020b).", "Voice Conversion without Text aims to convert the speaker identity in a recording while preserving the textual content (Abe et al., 1990; Stylianou et al., 1998; Toda et al., 2007).", "It has recently seen progress using neural approaches (Hsu et al., 2016, 2017a,b; Fang et al., 2018; Chorowski et al., 2018; Chou et al., 2018; Lorenzo-Trueba et al., 2018; Serrà et al., 2019), but the most relevant work to our own is the ZeroSpeech 2019 challenge (Dunbar et al., 2019; Tjandra et al., 2019; Cho et al., 2019), which addresses unsupervised learning of discrete speech units that can replace text and be used as input to TTS models.", "Unlike image-to-speech synthesis, these tasks only infer phonetic units from given audio recordings instead of generating them.", "Speech Pre-Training and Its Applications.", "Interest in this area has recently surged.", "Various learning objectives have been proposed, including auto-encoding with structured latent spaces (van den Oord et al., 2017; Eloff et al., 2019; Chorowski et al., 2019; Hsu et al., 2017b; Hsu and Glass, 2018b; Khurana et al., 2019), predictive coding (Chung et al., 2019; Wang et al., 2020a), contrastive learning (Oord et al., 2018; Schneider et al., 2019), and more.", "Prior work addresses inferring linguistic content such as phones from the learned representations (Baevski et al., 2020; Kharitonov et al., 2020; Hsu et al., 2021).", "In contrast, this work focuses on generating the learned representations from a different modality, which evaluates representations from a different perspective.", "A depiction of our modeling approach is shown in Figure 2. Caption generation for an image involves a cascade of two components: given an input image $I$, we first generate a linguistic unit sequence $U$ according to the I2U module $P(U \mid I)$.", "Given the linguistic symbol sequence $U$, we generate a speech waveform $S$ according to the U2S module $P(S \mid U)$.", "If the linguistic unit sequence $U$ were to take the form of natural language text, the model would be equivalent to the cascade of a conventional image captioning system followed by a TTS module.", "Note that we assume $S \perp I \mid U$ because prosody variation is not dependent on the image for the datasets considered.", "The key idea in this paper is to instead define $U$ to be a sequence of learned speech units that are as robust and compact as possible, like text, but discovered without text supervision.", "We define inference with this S2U model as $U = f(S)$, enabling us to transcribe any given speech audio waveform $S$ into a sequence of units $U$.",
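The factorization just described admits a very small generation and data-preparation skeleton. The module interfaces below (i2u.sample, u2s.synthesize, s2u.transcribe) are assumed placeholders for illustration only, not the released code.

```python
# Hedged sketch of the cascade P(S|U) * P(U|I) with S independent of I given U.
# All three module interfaces are assumed placeholders.
def generate_spoken_caption(image, i2u, u2s):
    units = i2u.sample(image)        # U ~ P(U | I): linguistic content only
    waveform = u2s.synthesize(units) # S ~ P(S | U): voice/prosody, image-free
    return units, waveform

def build_training_pairs(image_speech_pairs, s2u):
    # U = f(S) converts (image, speech) pairs into the two supervised problems:
    # (I, U) pairs for the I2U model and (U, S) pairs for the U2S model.
    i2u_data, u2s_data = [], []
    for image, speech in image_speech_pairs:
        units = s2u.transcribe(speech)
        i2u_data.append((image, units))
        u2s_data.append((units, speech))
    return i2u_data, u2s_data
```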
"The addition of this third component enables us to train $P(U \mid I)$ from a dataset of images paired with spoken captions $\{(I_1, S_1), \dots, (I_N, S_N)\}$.", "The conditional independence assumption between $S$ and $I$ given $U$ enables us to choose any arbitrary speech dataset for training $P(S \mid U)$, therefore enabling the speaker characteristics and other acoustic properties to be independently controllable from the I2U system (Wang et al., 2018; Hsu et al., 2019; Henter et al., 2018; Akuzawa et al., 2018).", "Table 1 summarizes the five datasets used for training the S2U, I2U, and U2S models.", "Note that we deliberately choose different datasets for training each module, which aims to examine the robustness of the units when transferring across domains, including shifts in speaker demography, speaking style (scripted/spontaneous), and linguistic content (book/newspaper/image description).", "Among the three datasets with image and speech pairs, Places, Flickr8k, and MSCOCO, we chose the latter two for training I2U models, because they include five captions per image, which is more suitable for caption metrics such as SPICE (Anderson et al., 2016); moreover, they are commonly used image captioning datasets with many text-based baselines in the literature.", "Places only contains one spoken caption per image and has not been used for captioning.", "Specifically, as part of this work we collect SpokenCOCO, a spoken version of the MSCOCO captioning dataset (Lin et al., 2014) with 742 hours of speech from 2,352 speakers, collected via Amazon Mechanical Turk by displaying the text to a person and having them read it aloud.", "Additional details regarding the dataset can be found in appendix Section A.",
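Anticipating the run-length encoding described just below, the following minimal sketch shows the lossy RLE applied to frame-level VQ unit IDs: consecutive repeats collapse to a single symbol, keeping identities and discarding durations. The integer codes are illustrative.

```python
# Minimal sketch of lossy run-length encoding over frame-level VQ unit IDs:
# symbol identities are kept, duration information is discarded.
def rle_collapse(frame_units):
    units = []
    for u in frame_units:
        if not units or units[-1] != u:
            units.append(u)
    return units

# e.g., codes emitted every 20-40 ms, repeated across consecutive frames
assert rle_collapse([13, 13, 13, 7, 7, 42, 42, 42, 42]) == [13, 7, 42]
```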
(2019) for WaveNet-VQ, which achieves the best ZeroSpeech 2019 challenge performance.", "Although the ResDAVEnet-VQ model has been shown to be capable of learning both phone-like and word-like units, the experiments in Harwath et al. (2020) show that only several hundred words are explicitly learned, and these tend to be visual words.", "Conversely, the phone-like units learned by the lower VQ layers of the model were shown to cover all of the phones in American English (of which there are only a few dozen).", "For this reason, we choose to use the phone-like units learned by the lower VQ layers to represent U.", "Nominally, the VQ layers output one-hot vectors at a uniform temporal rate, downsampled with respect to the framerate of the acoustic input depending upon which VQ layer is used.", "Given an input computed with a 10ms frame shift, the two VQ layers investigated in this paper (VQ2 and VQ3) respectively output vectors every 20ms and 40ms.", "In general, the VQ units are repeated for several consecutive frames.", "We can decrease the average length of the symbol sequence U by employing a lossy form of run-length encoding (RLE) (see Figure 2, and the sketch below), which retains the sequence of symbol identities but discards duration information.", "Each unit then represents a variable-length segment.", "This removes the burden of unit duration modeling from the I2U model and shifts it onto the U2S model, which we will show to be crucial.", "Both the I2U model and the U2S model are based upon recurrent seq2seq networks with attention (Bahdanau et al., 2015).", "Specifically, we adopt Show-Attend-and-Tell (SAT) (Xu et al., 2015) for the I2U model.", "It has an image encoder pre-trained for classification, which is language agnostic and hence should work for any language within our proposed framework.", "The decoder, on the other hand, is randomly initialized.", "We train the SAT model in two stages, where the encoder parameters are only updated in the second stage.", "We distinguish the models from the two stages as SAT and SAT-FT (fine-tuned), respectively, when presenting the results.", "For the U2S model, we adopt Tacotron2 (Shen et al., 2018) and WaveGlow (Prenger et al., 2019) for unit-to-spectrogram and spectrogram-to-waveform generation, respectively.", "In particular, a pre-trained WaveGlow is used without fine-tuning.", "The I2U model is trained on (I, f(S)) pairs, which requires pairs of image and speech, while the U2S model is trained on (f(S), S) pairs, which can be obtained from an arbitrary set of speech.", "Both models are trained with the maximum likelihood objective (E_{I,U}[log P(U | I)] for I2U and E_{S,U}[log P(S | U)] for U2S).", "First, how can we measure the performance of an image-to-speech system?", "Our system can fail to produce a good caption if the I2U model fails to encode linguistic/semantic information into the unit sequence, or if the U2S model fails to synthesize an intelligible waveform given a unit sequence.", "To better localize these failure modes, we evaluate the full I2S system as well as the U2S system in isolation.", "We evaluate the U2S system by using it as a vocoder to synthesize unit sequences inferred from real speech and soliciting human judgements in the form of Mean Opinion Score (MOS) and Side-By-Side (SXS) preference tests (Table 2).", "To evaluate the I2S system, we can use any method that measures the semantic information contained in the generated speech.", "We consider two sets of end-to-end metrics, word-based and retrieval-based, and one set of proxy unit-based metrics.",
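Before turning to these metrics, here is the run-length encoding sketch referenced above, a minimal illustration of the lossy RLE applied to framewise unit sequences (our own helper, not the authors' code):

```python
def run_length_encode(frame_units):
    """Lossy RLE: collapse runs of identical framewise VQ units into a single
    symbol, keeping unit identities but discarding duration information."""
    return [u for i, u in enumerate(frame_units)
            if i == 0 or u != frame_units[i - 1]]

# e.g., hypothetical framewise VQ3 codes at 40ms resolution:
assert run_length_encode([7, 7, 7, 12, 12, 3, 7, 7]) == [7, 12, 3, 7]
```

Each retained symbol now stands for a variable-length segment, which is why duration modeling shifts from the I2U decoder to the U2S synthesizer.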
"Word-based metrics transcribe a generated spoken caption into text (manually or with an ASR system) and then measure word-based captioning metrics against a set of reference captions, such as BLEU-4 (Papineni et al., 2002) (adjusted n-gram precision), METEOR (Denkowski and Lavie, 2014) (unigram F-score considering word-to-word alignment), ROUGE (Lin, 2004) (n-gram recall), CIDEr (Vedantam et al., 2015) (TF-IDF-weighted n-gram cosine similarity), and SPICE (Anderson et al., 2016) (F-score of semantic propositions in scene graphs).", "This enables comparison between image-to-speech systems with a text upper bound, but is not applicable to unwritten languages.", "Retrieval-based metrics include image-to-speech and speech-to-image retrieval (Harwath et al., 2020), which require a separately trained cross-modal retrieval model for evaluation.", "Such metrics are text-free, but they cannot measure other aspects of language generation, such as syntactic correctness (partially captured by BLEU-4) and the scope of the learned vocabulary.", "Lastly, unit-based metrics are similar to text-based ones, but replace words with units when computing n-gram statistics.", "However, systems built on different units are not directly comparable, and the scores can be inflated if duration is modeled using unit repetition.", "Second, what properties do the learned units have to have to be a drop-in replacement for text?", "The most essential differences between text and speech are the amount of information encoded and the sequence lengths.", "Beyond text, speech also encodes prosody, speaker, and environment information, as well as the duration of each phone, all of which are minimally correlated with the conditioned images.", "We hypothesize that learned speech units should discard such information in order to seamlessly connect the I2U and U2S modules.", "To verify this, we pay particular attention to the variation of the learned units in frame rate (VQ2/VQ3), encoding of duration information (RLE or not), and robustness to domain shift (WVQ/VQ3).", "Units are run-length encoded by default.", "Table 2a shows the properties of the units before run-length encoding.", "Third, how should language generation models be evaluated more generally?", "We examine evaluation of the I2S model using beam search-based decoding as well as sampling-based decoding.", "We find that because evaluation metrics that rely on beam search-based decoding only evaluate the mode of a model's posterior, they do not reflect the ability of a model to generate diverse linguistic content.", "Furthermore, we show that it is possible for a model's posterior mode to be linguistically meaningless, and yet meaningful language can still be generated with sampling-based decoding.", "Towards this end, we introduce a novel multi-hypothesis evaluation metric (M-SPICE), which uses sampling-based decoding (instead of beam search) to generate a set of captions.", "We can then compute the overall coverage of this caption set against a reference; see Section 4.4 for details.", "We construct a Tacotron-2 model for each of the three unit types on the LJSpeech audio data by transcribing each LJSpeech utterance into a unit sequence and then training the U2S model on the RLE-ed unit-sequence and spectrogram pairs.", "We evaluate the naturalness of the speech produced by each model on held-out data, both in-domain using LJSpeech and out-of-domain (OOD) using SpokenCOCO. [1]", "Amazon Mechanical Turk (AMT) workers performed Side-by-Side preference tests (SXS) and naturalness evaluation based on mean opinion scores (MOS) on a scale from 1 to 5 for each U2S model, which we display in Table 2.",
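As a concrete illustration of the n-gram machinery shared by the word-based and unit-based metrics above, the clipped n-gram precision at the core of BLEU can be computed over any token sequence, whether words from ASR transcripts or RLE-ed unit symbols. This is a textbook sketch, not the paper's evaluation code:

```python
from collections import Counter

def modified_ngram_precision(candidate, references, n):
    """Clipped n-gram precision (the core of BLEU), over arbitrary tokens."""
    ngrams = lambda seq: Counter(tuple(seq[i:i + n])
                                 for i in range(len(seq) - n + 1))
    cand = ngrams(candidate)
    # Clip each candidate n-gram count by its maximum count in any reference.
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in cand.items())
    return clipped / max(sum(cand.values()), 1)
```

Note how unit repetition would inflate this quantity: without RLE, long runs of the same unit produce matching n-grams in both candidate and reference, which is exactly the failure mode discussed for unit-based metrics.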
"Although VQ2 was preferred for in-domain synthesis on LJSpeech, VQ3 achieved the highest scores and the least degradation (-0.387) on the out-of-domain SpokenCOCO, indicating that, of the three units, VQ3 has the strongest robustness to domain shift.", "[1] In-domainness is defined with respect to the U2S training data (LJSpeech), not the S2U training data (PlacesAudio).", "Table 2a: Properties of the units, and MOS of the U2S models trained on these units with 95% confidence intervals; ABX errors are computed on the ZeroSpeech 2020 English test set.", "We trained an SAT model on SpokenCOCO for each of the three RLE-ed units, as well as VQ3 units without RLE.", "We also compare to text characters and words; the full hyperparameter and training details for all models are provided in Section B in the appendix, but in general we kept these as constant as possible when comparing different linguistic representations.", "Before connecting the U2S model, we noticed that all RLE speech unit models except the one trained on VQ3 units failed during beam search decoding on the test images (WVQ consistently failed, while VQ2 sometimes succeeded); rather than producing a diverse sequence of output units, the decoder would generally get stuck in a loop until the maximum decoding length was reached.", "This also happened using VQ3 units without RLE, indicating that the decoder could not model unit duration.", "Example outputs are provided in Table 3. We hypothesize that the reason the VQ2 and WVQ units failed is their lack of invariance to domain shift, as evidenced by their decay in naturalness when used for OOD synthesis, as shown in Table 2. This may cause the entropy of the unit distribution conditioned on an image to be higher, as each phoneme may be represented by multiple units, and therefore the I2U model suffers from the same looping issues as unconditional language models of text, as observed in (Holtzman et al., 2018; Fan et al., 2018; Holtzman et al., 2020;
Kulikov et al., 2019; Welleck et al., 2020).", "Table 4: Word-based caption evaluation using BLEU-4 (B-4), METEOR (M), ROUGE (R), CIDEr (C), and SPICE (S).
                                  MSCOCO                             Flickr8k
model                U      B-4    M      R      C      S      B-4    M      R      C      S
Xu et al. (2015)     word   0.243  0.239  -      -      -      0.213  0.203  -      -      -
Lu et al. (2017)     word   0.327  0.260  0.540  1.042  -      -      -      -      -      -
Wang et al. (2020b)  N/A    -      -      -      -      -      0.035  0.113  0.232  0.080  -
SAT                  word   0.315  0.253  0.533  0.984  0.185  0.216  0.207  0.469  0.550  0.149
SAT                  char   0.289  0.239  0.512  0.879  0.172  0.190  0.190  0.441  0.476  0.136
SAT                  VQ3    0.186  0.186  0.446  0.584  0.127  0.116  0.141  0.390  0.232  0.091
SAT-FT               word   0.339  0.265  0.551  1.062  0.196  0.225  0.215  0.483  0.584  0.155
SAT-FT               char   0.323  0.256  0.536  1.002  0.187  0.191  0.196  0.450  0.519  0.143
SAT-FT               VQ3    0.233  0.212  0.478  0.732  0.149  0.125  0.145  0.391  0.245  0.095", "To evaluate the full Image-to-Speech model, we first train an ASR system on the re-synthesized SpokenCOCO captions using the VQ3 Tacotron-2 model.", "This enables us to estimate a word-level transcription of the spoken captions produced by our system.", "In order to verify that the synthesized captions are intelligible to humans and that the ASR system did not simply learn to recognize artifacts of the synthesized speech, we asked AMT workers to transcribe into words a set of 500 captions generated by our I2U-to-U2S system and also evaluated their naturalness.", "Three workers transcribed and three workers rated each caption, allowing us to compute an MOS score (3.615 ± 0.038), a word error rate (WER) between the 3 human transcriptions (9.40%), as well as an average WER between the human- and ASR-produced transcriptions (13.97%).", "This confirms that our system produces reasonably natural speech and that the ASR is sufficiently accurate for transcribing synthesized speech.", "Table 4 summarizes our results on MSCOCO and Flickr8k using beam search.", "We compare with the literature for bottom-up text captioning (rows 1-2) and text-free end-to-end image-to-speech synthesis (row 3).", "We train the decoder of an SAT model while keeping the image encoder fixed (rows 4-6), in addition to fine-tuning the encoder (rows 7-9).", "Despite having no access to text, the SAT-FT speech captioning model trained on VQ3 units achieves a BLEU-4 score of .233 with beam search decoding on MSCOCO.", "This is very close to the .243 achieved by the original SAT word-based captioning model.", "Figure 1 shows that the generated captions are fluent and reflect the implicit learning of some syntactic rules.", "It is evident that the proposed model is capable of generating fluent and meaningful image captions.", "Results comparing four unit representations on all three sets of metrics are shown in Table 5. First of all, by comparing word-based and unit-based evaluations, we note that the relative ranking among VQ3, VQ2, and WVQ is consistent across BLEU-4, METEOR, and ROUGE for SAT models; however, VQ3\RLE achieves abnormally high scores on these metrics despite producing trivial captions for all images, as shown in Table 3.",
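For reference, the WER numbers quoted above are the standard Levenshtein-based word error rate; a minimal self-contained implementation (not the paper's evaluation code) looks like this:

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / len(reference),
    computed with the usual edit-distance dynamic program over words."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # inserting all hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("a dog runs on the beach",
                      "a dog runs in the beach"))   # 1/6 ≈ 0.167
```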
"This is because unit 32 has learned to represent non-speech frames such as silence, which frequently occur at both the beginning and end of utterances.", "Without RLE, consecutive strings of 32 units are extremely common in both the candidate and reference captions, which inflates the scores of this model.", "The exception here is the CIDEr metric, which incorporates TF-IDF weighting that tends to de-emphasize these kinds of uninformative patterns.", "Figure 3: MSCOCO test SPICE scores of various units and decoding methods.", "Nonetheless, when comparing SAT and SAT-FT with VQ3 units, CIDEr does not rank them the same way as the word-based metrics.", "Regarding retrieval-based evaluation, despite the fact that the ResDAVEnet model was only trained on the original, human-spoken captions for the MSCOCO images, it works very well for the fully synthetic captions.", "The speech-retrieval and image-retrieval scores for 1k human-spoken validation captions are 0.867 and 0.828 R@10, respectively, while the SAT-FT VQ3 model achieves 0.766 and 0.765 R@10.", "This indicates that this image-to-speech model is able to infer the salient semantic content of an input image, generate a unit sequence that captures that content, and generate speech that is sufficiently natural sounding for the ResDAVEnet model to recover that semantic information.", "Several of the other image-to-speech models also achieve respectable retrieval performance, and the overall ranking of the models mirrors what we found when using word-based evaluation metrics.", "The results in the previous section only evaluate beam search decoding with the I2U model, and do not fully reveal the posterior over captions for an input image, or whether the unit representations that failed with beam search would work well with other methods.", "To probe this, we evaluate the models using sampling-based caption generation.", "Figure 3 shows the SPICE scores on SpokenCOCO using beam search and two sampling-based methods.", "VQ3 still performs the best of all unit types with both beam search and sampled decoding.", "VQ2 can sometimes generate captions with beam search when the beam is kept small, but as the beam grows it begins to loop and the scores become very low.", "We see that all unit types can generate reasonable captions when decoding via sampling.", "Moreover, we discovered that 1) ResDAVEnet-VQ units consistently outperform the WaveNet-VQ units, suggesting that they better capture sub-word structure, and 2) VQ3\RLE achieves better scores than VQ2 when using a larger temperature or a larger k for top-k sampling.", "We estimated the vocabulary size of the SAT-FT model with VQ3 by counting the number of unique recognized words produced at least 3 times when captioning the SpokenCOCO test images.", "These numbers are shown for the model under the various decoding methods in Figure 4. The number of captions per image is denoted by n, where the top candidates are used for beam search and i.i.d. samples are drawn for sampling.", "Sampling-based decoding reveals a larger vocabulary size than beam search, and the number of words learned by our models (approximately 2^12) is far greater than the number of words learned by the ResDAVEnet-VQ model (approx. 279) in Harwath et al. (2020).",
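The vocabulary-size estimate just described is simple to reproduce given ASR transcripts of the generated captions; a sketch with hypothetical variable names:

```python
from collections import Counter

def estimated_vocabulary(transcripts, min_count=3):
    """Count unique recognized words produced at least `min_count` times
    across all generated captions for the test images."""
    counts = Counter(word for caption in transcripts
                     for word in caption.split())
    return {word for word, c in counts.items() if c >= min_count}

# len(estimated_vocabulary(asr_transcripts)) gives the reported vocab size.
```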
"Table 6: Results of disentangled voice control, synthesizing the same units with a single-speaker and a multi-speaker U2S model; units are decoded using beam search from the SAT-FT VQ3 MSCOCO model.
Speaker   Gender   Region     B-4    M      S
U2S trained on LJSpeech:
-         F        -          0.233  0.212  0.149
U2S trained on VCTK:
p247      M        Scottish   0.234  0.211  0.148
p231      F        English    0.233  0.210  0.146
p294      F        American   0.236  0.212  0.148
p345      M        American   0.234  0.209  0.144
p307      F        Canadian   0.234  0.211  0.148", "We hypothesize that training a model to generate spoken captions encourages it to learn many more words than training it only to retrieve images from captions.", "We also hypothesize that because beam search attempts to find the mode of the posterior over captions, it tends to produce a smaller set of words and does not reveal the breadth of the model distribution.", "The previous section showed that even when the SPICE scores were comparable, sampling-based decoding revealed a much larger model vocabulary than beam search, especially when multiple captions are generated for each image.", "This highlights a limitation of SPICE in measuring diversity.", "Formally speaking, SPICE computes an F-score between two bags of semantic propositions, T(S) and T(c), parsed from a set of references S = {s_i}_i and a hypothesis c, where T(c) denotes the bag of propositions extracted from a scene graph parsed from c, and where the proposition bag of multiple sentences is T(S) = ∪_i T(s_i).", "To extend SPICE for scoring multiple hypotheses C = {c_j}, j = 1, ..., J, one can compute an average SPICE, (1/J) Σ_j F1(T(S), T(c_j)), or use the oracle SPICE proposed in Vijayakumar et al. (2018), max_j F1(T(S), T(c_j)).", "However, these metrics fail to capture the diversity among hypotheses.", "Consider two hypothesis sets, C_1 = {c_11, c_12} and C_2 = {c_21, c_22}, where T(c_11) = T(c_12) = T(c_21) = {(girl), (table), (girl, sit-at, table)}, T(c_22) = {(girl), (girl, young)}, and T(S) = {(girl), (table), (girl, young), (girl, sit-at, table)}.", "Here, average SPICE scores C_2 below C_1 (it penalizes the more diverse second hypothesis), and oracle SPICE scores the two sets identically, even though C_2 collectively covers more of the reference propositions.", "To address the deficiencies of the existing metrics, we propose a new metric named multi-candidate SPICE (M-SPICE), which takes the union of the candidate propositions and computes the F-score against the reference propositions: F1(T(S), ∪_j T(c_j)).", "M-SPICE assigns a higher score if the set captures diverse and correct propositions, and it is obvious that the score of C_2 is higher than that of C_1, as desired.", "Figure 5 shows the M-SPICE scores of our SAT-FT model using VQ3 units on SpokenCOCO.", "When evaluating over multiple captions (n > 1), using the beam search hypotheses increases the score less than sampling.", "We examine to what extent the VQ3 units are portable across different speakers by training a U2S model on the VCTK dataset that additionally takes a speaker ID as input.", "The resulting model is able to generate speech with the voice of any VCTK speaker.", "We evaluate the captions produced by this system on SpokenCOCO for 5 speakers in Table 6.", "To compute these scores, we transcribe the captions generated by each model into text using the ASR system described in Section 4.2, which was trained solely on SpokenCOCO captions re-synthesized with the LJSpeech U2S model.", "The scores in Table 6 indicate not only that the I2U model can be easily integrated with U2S models representing a diverse set of speakers, but also that the LJSpeech ASR system works very well on the speech synthesized from the VCTK models.",
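Treating proposition bags as sets, M-SPICE is a few lines of code; here is a sketch over the worked example above (the SPICE scene-graph parser itself is assumed to exist upstream):

```python
def f1(ref_props, props):
    if not props:
        return 0.0
    p = len(ref_props & props) / len(props)
    r = len(ref_props & props) / len(ref_props)
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def m_spice(ref_props, candidate_prop_sets):
    """Union the propositions of all candidate captions, then score once."""
    union = set().union(*candidate_prop_sets)
    return f1(ref_props, union)

ref = {("girl",), ("table",), ("girl", "young"), ("girl", "sit-at", "table")}
c11 = c12 = c21 = {("girl",), ("table",), ("girl", "sit-at", "table")}
c22 = {("girl",), ("girl", "young")}

print(m_spice(ref, [c11, c12]))  # C_1: ~0.857, no reward for repetition
print(m_spice(ref, [c21, c22]))  # C_2: 1.0, rewarded for diverse coverage
```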
models.", "In this paper, we presented the first model capable of generating fluent spoken captions of images without relying on text, which almost matches the performance of early text-based image captioning models.", "Our comprehensive experiments demonstrated that learned units need to be robust, of low framerate, and encoding little or none duration information to be a drop-in replacement for text.", "We also identified the caveats of mode-based evaluation and proposed a new metric to address semantic diversity.", "As part of this work, a novel dataset of over 600k spoken captions for the MSCOCO dataset is introduced, which we will make publicly available to the research community.", "Future work should investigate applying the proposed method to additional languages, devising improved speech unit representations, and jointly training the speech unit model with the I2S model.", "This would offer the opportunity to explore new analysis-by-synthesis training objectives." ]
[ "objective", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "other", "abstain", "objective", "objective", "abstain", "result", "method", "other", "other", "abstain", "objective", "objective", "objective", "objective", "result", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain" ]
[ "Evaluation in NLP is usually done by comparing the scores of competing systems independently averaged over a common set of test instances.", "In this work, we question the use of averages for aggregating evaluation scores into a final number used to decide which system is best, since the average, as well as alternatives such as the median, ignores the pairing arising from the fact that systems are evaluated on the same test instances.", "We illustrate the importance of taking the instance-level pairing of evaluation scores into account and demonstrate, both theoretically and empirically, the advantages of aggregation methods based on pairwise comparisons, such as the BradleyTerry (BT) model, a mechanism based on the estimated probability that a given system scores better than another on the test set.", "By re-evaluating 296 real NLP evaluation setups across four tasks and 18 evaluation metrics, we show that the choice of aggregation mechanism matters and yields different conclusions as to which systems are state of the art in about 30% of the setups.", "To facilitate the adoption of pairwise evaluation, we release a practical tool for performing the full analysis of evaluation scores with the mean, median, BT, and two variants of BT (Elo and TrueSkill), alongside functionality for appropriate statistical testing.", "Research is driven by evaluation results, with attention and resources being focused on methods identified as state of the art (SotA).", "The proper design of evaluation methodology is thus crucial to ensure progress in the field.", "In NLP, evaluation usually consists in comparing the averaged scores of competing systems over a common set of test instances.", "Indeed, averaging scores independently for each system and declaring the one with the highest average to be best is particularly 1 2 3 4 5 Index of test instances 2.5 5.0 7.5 10.0 12.5 15.0 17.5 E v a l u a t i o n s c o r e s o f s y s t e m s ( h i g h e r i s b e tt e r ) System A Mean (same for all) System B Medians System C Figure 1: Motivating example (synthetic data).", "Here, we critically assess the specific choice of the average to aggregate evaluation scores.", "In particular, we emphasize that there is a natural in-stance-level pairing between the evaluation scores of systems, which aggregation mechanisms such as the mean or median fail to take into account: as they produce a score for each system independently, systems that have the same set of scores (but potentially in different order) cannot be distinguished.", "Consider the three systems A , B , and C compared on five test instances in Fig. 1. 
Despite a complex pairing structure, they all have the same mean score across test instances.", "Moreover, even though B is better than A on all test instances but one, the median of A is greater than the median of B.", "In this work, we discuss an alternative aggregation mechanism: the Bradley-Terry (BT) model (Bradley and Terry, 1952).", "BT compares systems for each test instance and estimates the latent strength of systems based on how frequently one system scores higher than another.", "Such paired mechanisms have already been successfully used to aggregate human judgments (Novikova et al., 2018; Sedoc and Ungar, 2020); for example, WMT evaluation protocols regularly employ TrueSkill (Herbrich et al., 2007), a Bayesian variant of BT (Sakaguchi et al., 2014).", "Contributions.", "We contribute the first comprehensive analysis of the BT model (especially vis-à-vis the mean and median) as an aggregation mechanism for comparing system scores in NLP.", "(i) We illustrate the importance of accounting for instance-level pairing and discuss the conditions under which the mean, median, and BT disagree about the ordering of systems.", "In Sec. 3, we draw parallels with the field of statistical testing, where paired statistical tests are recommended when comparing paired variables.", "Thus, we argue that paired aggregation mechanisms such as BT are more robust alternatives to the mean and median.", "We support this argument with simulations in Sec. 4.", "(ii) We show that the differences between mean, median, and BT matter in practice.", "By re-evaluating 296 real NLP evaluation setups across four tasks and 18 evaluation metrics, different aggregation mechanisms yield different conclusions as to which systems are SotA in about 30% of the setups (Sec. 5).", "These results hold when replacing BT by the Elo (Elo, 1978) and TrueSkill variants.", "(iii) We discuss further advantages and potential limitations of BT, alongside possible resolutions, in Sec. 7.", "(iv) We recommend replacing the mean by BT in future evaluations of NLP systems.", "To ease the adoption of more robust aggregation mechanisms, we release Pairformance, [1] a practical tool for performing full analyses of evaluation scores with the mean, median, BT, and two variants of BT (Elo and TrueSkill).", "The tool reports paired evaluation results alongside appropriate statistical testing for all five aggregation mechanisms and various visualization functionalities to elucidate the pairing structure between system scores.", "Code and data for replicating our analyses and experiments are available online. [2]", "[1] https://github.com/epfl-dlab/pairformance", "[2] https://github.com/epfl-dlab/BT-eval", "2 Aggregation of evaluation results", "In this section, we briefly present the three aggregation mechanisms we consider.", "1. At least two systems, A and B, to compare, with latent strengths θ_A and θ_B that we aim to estimate.", "2. A test set T = {(x_l, y_l) : l = 1, ..., n} consisting of n test instances, where x_l is the input and y_l is the ground-truth target output.", "3. An evaluation metric M for scoring system outputs based on the target outputs y_l, resulting in the sequence of evaluation scores M_A = ⟨M(A(x_l), y_l) : l = 1, ..., n⟩ for system A.", "4.
An aggregation mechanism Ψ that decides whether system A is better than B based on the evaluation scores of the two systems.", "We use Ψ_{T,M}(A, B) = Ψ(M_A, M_B) to denote the comparison mechanism between A and B on the test set T with evaluation metric M.", "Here, Ψ outputs its guess about which system is the best (or declares the comparison inconclusive if the difference is not statistically significant).", "For simplicity, we drop the dependency on T and M in the notation, simply writing Ψ(A, B).", "For example, in text summarization, x_l is a source document from the test set, y_l its corresponding reference summary, and M might be ROUGE (Lin, 2004).", "The decision mechanism usually compares the individual systems' mean evaluation scores, where the system with the highest mean score (here, mean ROUGE score) is declared better.", "Consistent evaluation result.", "We say that the outcome of such an evaluation is consistent if it recovers the ordering of systems implied by the inherent strengths of the systems: Ψ(A, B) = A ⟺ θ_A > θ_B.", "Probabilistic model.", "As commonly done in the literature on statistical testing, we view the evaluation scores of a system A as n indexed random variables: X_A^(l), l = 1, ..., n, where n is the size of the test set.", "Note that this sequence of random variables is not necessarily i.i.d. Furthermore, even though systems A and B are independent, their evaluation scores are not, since there is an instance-level pairing.", "Intuitively, knowing the score of A on an instance (x_l, y_l) can provide information about the expected performance of B.", "We now introduce three aggregation mechanisms.", "We investigate their properties in subsequent sections.", "Mean.", "This is the current standard: the system with the highest average score is declared the strongest.", "We denote this aggregation mechanism as MEAN.", "The average score of system A is computed as E_A = (1/n) Σ_{l=1}^{n} X_A^(l).", "Median.", "The median is an interesting alternative to the mean because it is robust to outliers.", "Here, the system with the highest median score is declared to be the strongest.", "The median score M_A of a system A is the central value in the sorted list of evaluation scores of A.", "We denote this aggregation mechanism as MEDIAN.", "Bradley-Terry.", "The third option examined here is the Bradley-Terry (BT) model (Bradley and Terry, 1952).", "While MEAN and MEDIAN compute scores for systems A and B independently, BT is a function of the joint random variable (X_A^(l), X_B^(l)).", "BT estimates the relative strengths θ_A and θ_B of the two systems A and B by comparing the evaluation scores for each test instance: P(A > B) = θ_A / (θ_A + θ_B). (1)", "Intuitively, P(A > B) is the probability that, for any given test instance, A scores higher than B.", "The BT model chooses θ_A and θ_B in order to best explain the observations.", "The system with the highest θ is declared strongest.", "When considering only two systems, the latent strength θ_A is the number of instances for which A scores better than B (and similarly for θ_B).", "When the number of systems is greater than two, BT solves an iterative optimization algorithm that is guaranteed to converge to a unique solution (Bradley and Terry, 1952).", "We give details about BT and its computation in the general case in Appendix E.",
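For more than two systems, the iterative procedure referenced above is classically solved with a minorization-maximization (MM) scheme; the following is our own compact sketch of that family of algorithms, not the paper's Appendix E code:

```python
import numpy as np

def fit_bt(wins, iters=200):
    """MM fitting of Bradley-Terry strengths.

    wins[i, j] = number of test instances on which system i scores strictly
    higher than system j (ties ignored; assumes every system wins at least once).
    """
    n = wins.shape[0]
    theta = np.ones(n)
    n_ij = wins + wins.T                 # comparisons in which i and j differ
    w_i = wins.sum(axis=1)               # total wins of each system
    for _ in range(iters):
        denom = np.array([
            sum(n_ij[i, j] / (theta[i] + theta[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        theta = w_i / denom
        theta = theta / theta.sum()      # BT is scale-invariant; fix the scale
    return theta

# Two systems: strengths reduce to win counts. A beats B on 4 of 5 instances:
print(fit_bt(np.array([[0, 4], [1, 0]])))  # [0.8, 0.2] -> P(A > B) = 0.8
```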
"We denote as BT the decision mechanism based on the BT model.", "While it is much less common than MEAN and MEDIAN, we will see below that BT satisfies interesting properties making it a more robust alternative.", "Since the roles played by A and B are symmetrical, we now assume without loss of generality that system A is better, i.e., θ_A > θ_B.", "Proposition 1. If θ_A > θ_B, then MEAN is consistent ⟺ E_A − E_B > 0; MEDIAN is consistent ⟺ M_A − M_B > 0; BT is consistent ⟺ M_{A−B} > 0, where E_S and M_S are the mean and median of the evaluation scores of system S, and M_{A−B} is the median of the differences between the evaluation scores of A and B. Note that E_S, M_S, and M_{A−B} are all random variables.", "The proof is given in Appendix B. Note that, whereas the expectation is linear (E_A − E_B = E_{A−B}), the median is not (in general, M_A − M_B ≠ M_{A−B}).", "Robustness to outliers.", "The mean is not robust to outliers: E_{A−B} can be swayed above or below the threshold of 0 by a small number of test instances for which the difference between system scores is large.", "On the contrary, the median is a robust statistic that cannot be easily influenced by outliers.", "Similarly, BT is robust to outliers because its decision is based on the median of differences, M_{A−B}.", "Importance of pairing.", "The critical difference between BT, MEAN, and MEDIAN is that only BT preserves the pairing information.", "Both MEAN and MEDIAN compute a statistic from the (unordered) set of scores X_A^(l) and X_B^(l) independently and then compare the aggregate statistics, losing the pairing structure.", "If the pairing actually does not matter, any permutation of the indices of system scores leaves the distribution of paired evaluation scores unchanged.", "This happens, for example, when both X_A^(l) and X_B^(l) are i.i.d. [3] However, in the general case, the pairing matters.", "One particular example is when there exist different types of test instances and systems behave differently for different types, e.g., when there are easy instances on which all systems have higher scores.", "For example, consider the three systems and their evaluation scores on five test instances in Fig. 1. System A is worse than C on all instances but one, so C > A according to BT, yet the median of A is greater than the median of C (10 vs. 7).", "At the same time, B outperforms C on all instances but one, so B > C according to BT.", "[3] More generally, when the two sequences of random variables are exchangeable.", "For MEDIAN and MEAN, which ignore the pairing, A and B are completely equivalent, even though there is a clear difference regarding which system is more likely to be the best.", "This difference is revealed in the pairing structure.", "In general, any mechanism ignoring the pairing cannot capture the difference between A and B.", "Choosing an aggregation mechanism.", "In Prop. 1, we stated the conditions for each mechanism to be consistent.", "Choosing an aggregation mechanism for a specific evaluation setup boils down to deciding which condition is more likely to hold in the setup.", "Note that none of the conditions implies any other condition in Prop. 1.",
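The pairing-blindness described here is easy to reproduce in a few lines. The toy scores below echo the Fig. 1 pattern (same mean for both systems, B ahead on four of five instances, yet a higher median for A); the numbers are ours, not the figure's actual values:

```python
import statistics

def bt_two_systems(scores_a, scores_b):
    # For two systems, BT strengths reduce to counting per-instance wins.
    wins_a = sum(a > b for a, b in zip(scores_a, scores_b))
    wins_b = sum(b > a for a, b in zip(scores_a, scores_b))
    return wins_a / (wins_a + wins_b)   # estimated P(A beats B)

A = [2, 3, 10, 11, 14]
B = [4, 5, 12, 13, 6]

print(statistics.mean(A), statistics.mean(B))      # 8 vs 8   -> MEAN ties
print(statistics.median(A), statistics.median(B))  # 10 vs 6  -> MEDIAN picks A
print(bt_two_systems(A, B))                        # 0.2      -> BT picks B (4/5 wins)
```

Only BT looks at the instance-level pairing, which is why it is the only mechanism here to notice that B outscores A on four out of five instances.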
"When comparing BT against MEAN (or MEDIAN), there are three possible scenarios: (i) BT agrees with MEAN (or MEDIAN), (ii) BT is consistent but MEAN (or MEDIAN) is not, and (iii) MEAN (or MEDIAN) is consistent but BT is not.", "In case (ii), for most instances, the better system has a higher score than the worse system, but MEAN (or MEDIAN) fails.", "For example, MEAN may be swayed by outliers, and MEDIAN may be swayed by jumps in the score lists, as in the example above.", "In case (iii), for most instances, the better system has a lower score than the worse system, yet particular variations in the marginals make MEAN or MEDIAN get the ordering correct.", "This is a very peculiar scenario: for MEAN, it implies that on the few instances on which the better system did better, the difference between evaluation scores was large enough to lift the mean of the better system above the other.", "We argue that if one really believes that the evaluation setup is likely to be in case (iii), then one does not trust the evaluation setup in the first place.", "It corresponds to assuming that the observed scores are inconsistent for the majority of test instances.", "If this is the case, one should rather improve the evaluation setup (e.g., metric, test set) in order to be more representative of the phenomena that one desires to capture.", "Overall, the condition making BT consistent appears to be the most natural one.", "Trusting MEAN or MEDIAN more than BT implies holding an unintuitive belief about the evaluation setup, namely that the better system does worse than the worse system on a majority of test instances.", "From another perspective, the random variables E_A − E_B (MEAN) and M_A − M_B (MEDIAN) are less likely to be (correctly) greater than zero in the presence of (i) complex pairing structures or (ii) outliers.", "The variable M_{A−B} (BT), on the contrary, is not affected by complex pairings or outliers.", "Fig. 2 summarizes the problem of ignoring the pairing and offers a graphical criterion to understand the decisions made by MEAN, MEDIAN, and BT.", "In each plot, the densities are estimated by placing test instances at coordinates given by the evaluation scores of the two systems.", "The evaluation scores of A (green) are on the x-axis, and the evaluation scores of B (blue) on the y-axis.", "We also plot the marginal distributions of evaluation scores, from which we can read off the means and medians.", "When the mean of X_B^(l) is greater than that of X_A^(l), the two extended lines representing the means meet in the upper triangle (above the line X_A = X_B), and analogously for the median.", "But mean and median, being only functions of the marginals, completely ignore the pairing.", "Fig.
2 illustrates this by depicting three completely different pairing structures where the marginals (and thus the means and medians) of A and B remain unchanged.", "(In Appendix A.1, we explain how to generate infinitely many such examples.)", "On the contrary, BT, being a property of the pairing (the 2D density), predicts that B is better than A when there is more mass in the upper triangle, i.e., more instances for which B scores higher than A.", "In the middle figure, the pairing indicates that A is better than B, in disagreement with the decisions of MEAN and MEDIAN.", "The above discussion about the differences between MEAN, MEDIAN, and BT has interesting parallels with statistical testing.", "When comparing the means of two systems over the same test set, the recommended statistical test is the paired t-test (Fisher, 1935).", "When comparing medians instead of means, the appropriate test is the sign test, which measures whether the median of the differences is significantly different from zero.", "Interestingly, the statistic of the sign test is precisely the one in the condition for BT to be consistent (see Prop. 1).", "Wilcoxon's signed-rank test (Wilcoxon, 1945) is often used as an alternative to the sign test because it has more statistical power (at the cost of making more assumptions).", "Figure 2: These 2D plots represent the distribution of test instances with coordinates given by the scores of the two systems being compared, i.e., the x-axis is the score X_A^(l) of system A on some test instance (x_l, y_l), and the y-axis is the score X_B^(l) of system B on the same instance; each panel marks the means, the medians, and the line X_A = X_B.", "However, Divine et al. (2018) showed that Wilcoxon's signed-rank test does not always properly account for the pairing of the data, unlike the sign test.",
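In code, the three tests mentioned here are essentially one-liners with SciPy (binomtest requires SciPy 1.7 or later); the sign test is expressed as a binomial test on the win counts, run here on toy paired scores:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(0.60, 0.05, 200)          # toy per-instance scores of system A
b = a + rng.normal(0.01, 0.03, 200)      # system B: paired and slightly better

print(stats.ttest_rel(b, a).pvalue)      # paired t-test (compares means)
print(stats.wilcoxon(b, a).pvalue)       # Wilcoxon signed-rank test
wins = int(np.sum(b > a))                # sign test: is median(b - a) != 0?
print(stats.binomtest(wins, n=len(a), p=0.5).pvalue)
```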
"When performing statistical testing, it seems obvious that we should use the paired version of tests when the data is naturally paired (Rankel et al., 2011).", "Even works discussing statistical testing in NLP recommend Wilcoxon's signed-rank test (Graham, 2015; Owczarzak et al., 2012; Dror et al., 2018).", "Yet, to obtain aggregated scores for systems, the community still mostly uses aggregation mechanisms that ignore the pairing, such as MEAN.", "MEDIAN is the outlier-resistant version of MEAN, and BT is the paired variant of MEDIAN.", "Whenever one recommends a paired test of medians, such as the sign test or Wilcoxon's signed-rank test, to obtain p-values, one should use BT to compare system scores.", "Next, we perform simulations to extend the analysis of the previous section to (i) N > 2 systems, (ii) finitely many test samples, and (iii) a practical implementation of BT (for N > 2 systems, BT is an iterative optimization algorithm, as discussed in Appendix E).", "We synthesize evaluation scores with various properties, starting with systems of predefined implicit strengths θ_i.", "To create situations where the pairing of evaluation scores matters, we introduce multiple test instance types.", "For each type, systems perform differently but still have the same relative strength (P(A > B)), differing only by an added offset.", "For example, the evaluation scores obtained by A and B could be sampled from N(θ_A, σ) and N(θ_B, σ) for one test instance type, and from N(θ_A + ε, σ) and N(θ_B + ε, σ) for another type, with ε being the offset.", "We sample evaluation setups by varying the following properties: the number of systems, the number of test instances, the percentage of outliers, the number of test instance types, and the level of noise.", "This results in 2,880 simulated evaluation setups.", "In Appendix A.2, we give the detailed algorithm and parameters used to generate the data.", "In Fig. 3, we report Kendall's τ between the latent scores θ_i and the aggregated scores estimated by MEAN, MEDIAN, and BT.", "When the evaluation setup does not present any difficulty (Fig. 3(a, b)), all aggregation mechanisms work equally well (within each other's 95% error bounds), improving with more samples (Fig. 3(b)) and deteriorating with more systems (Fig. 3(a)).", "Unsurprisingly, MEAN fails in the presence of outliers, whereas MEDIAN and BT are unaffected (Fig. 3(c, e, f)).", "When several types of test instances are considered, MEDIAN begins to fail (Fig. 3(d)), which is made worse when outliers are also present (Fig. 3(f)).", "Overall, BT is more robust and does not fail when the pairing matters (Fig. 3(g, h)).",
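A stripped-down version of this simulation protocol fits in a short script; the parameters below are illustrative choices, not the ones from Appendix A.2, and `fit_bt` is the MM routine sketched earlier:

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n_sys, n_inst = 10, 500
theta = np.linspace(1.0, 2.0, n_sys)                 # latent strengths
offsets = rng.choice([0.0, 5.0], size=n_inst)        # two instance types
scores = theta[:, None] + offsets[None, :] + rng.normal(0, 0.5, (n_sys, n_inst))
scores[:, :25] += rng.normal(0, 50, (n_sys, 25))     # inject outlier instances

wins = (scores[:, None, :] > scores[None, :, :]).sum(axis=2)  # pairwise win counts
estimates = {
    "MEAN": scores.mean(axis=1),
    "MEDIAN": np.median(scores, axis=1),
    "BT": fit_bt(wins),
}
for name, est in estimates.items():
    print(name, round(kendalltau(theta, est).correlation, 3))
```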
3(g,", "h).", "In this section, we perform large-scale experiments using real evaluation scores from four NLG tasks.", "For summarization, we use the TAC-08, TAC-09, TAC-11 and CNN/DM (Hermann et al., 2015) datasets.", "For machine translation, we use the shared tasks of WMT-17 (Bojar et al., 2017), WMT-18 (Ma et al., 2018), and WMT-19 (Ma et al., 2019).", "For image captioning, we use the MSCOCO (Lin et al., 2014) dataset, and for dialogue, we use the PersonaChat and TopicalChat (Mehri and Eskenazi, 2020) datasets.", "The evaluation scores are obtained with a total of 18 different evaluation metrics: BLEU-[1,2,3,4] (Papineni et al., 2002), ROUGE-[1,2,L] (Lin, 2004), ROUGE-WE-[1,2] (Ng and Abrecht, 2015), JS-[1,2] (Lin et al., 2006), S3-[pyr, resp] (Peyrard et al., 2017), CIDEr (Vedan-tam et al., 2015), Chrfpp (Popovic, 2017), METEOR (Lavie and Agarwal, 2007), MoverScore (Zhao et al., 2019), and BERTScore (Zhang et al., 2020).", "Some metrics are only available for some task; e.g., CIDEr, METEOR are only available for the image captioning task.", "We provide details about datasets, metrics, and their statistics in Appendix A.3.", "Overall, across datasets and metrics we have 296 evaluation setups, 73,471 pairs of systems, and 91,197 test instances.", "We also experiment with sub-sampling different sizes of test sets (see Appendix A.3) to simulate varying train/dev/test splits or cross-validation.", "In Table 1, we report the disagreement between aggregation mechanisms over all the data with three measures: the percentage of pairs ranked in a different order (rescaled version of Kendall's ), the percentage of setups where the state-of-the-art (SotA) systems are different, and the percentage of setups where the top 3 systems are different (com-pared as sets).", "A significant fraction of pairs of systems (about 10%) are ranked differently by different mechanisms.", "More importantly, top systems are often different (in about 40% of setups for top 1 and 50% for top 3).", "We can conclude that the choice of aggregation mechanism has a real impact on evaluation outcome.", "The observed disagreement between the three aggregation metrics implies that we are not in the case depicted by Fig.", "3(a) and Fig.", "3(b), i.e., the pairing matters and there are outliers in real data.", "In the next paragraphs, we break down the disagreement per evaluation metric, task, and test set size.", "Detailed results are provided in Appendix C. Which metrics are impacted most?", "We report in Fig.", "4(a) the percentage of disagreement between aggregation mechanisms per metric averaged over datasets, when subsampling test sets of different sizes uniformly (see Appendix A.3 for details).", "While most metrics are available for all four tasks, METEOR and CIDEr are only available for the captioning task.", "Therefore, the observed disagreements for these metrics may be a feature of the task instead of the metrics.", "Interestingly, recent metrics 2307 Disagree (cid:54) = SotA (cid:54) = Top-3 MEAN vs. MEDIAN 4% 18% 30% MEAN vs. BT 9% 40% 49% MEDIAN vs. 
"Interestingly, recent metrics such as BERTScore and MoverScore seem less affected.", "On the other hand, BLEU variants are the most impacted, particularly when comparing MEAN or MEDIAN against BT.", "The disagreement between MEAN and MEDIAN is stable across metrics.", "In general, MEAN and MEDIAN are more in agreement with one another than they are with BT, which indicates that pairing issues have a stronger effect than outliers.", "Which tasks are impacted most?", "Fig. 4(b) summarizes an analysis as above, but across tasks instead of metrics.", "Again, to control for the fact that some tasks may have larger datasets, we subsample uniformly from various test set sizes.", "The results are averaged over evaluation metrics.", "Machine translation and summarization suffer the least, while dialogue and image captioning display larger disagreement between aggregation mechanisms.", "This suggests important future research directions to improve the evaluation setups in these tasks.", "Importance of dataset size.", "In Fig. 4(c), we report disagreement across test set sizes, while averaging over datasets and evaluation metrics.", "It is reassuring to observe that with larger test sets, the different mechanisms tend to agree more, such that it matters less which one is actually chosen.", "However, for MEAN vs. BT and MEDIAN vs. BT, the disagreement does not continue to decrease below 10% with more test instances.", "For MEAN and MEDIAN, the disagreement is lower but exhibits the same behavior, never falling below a certain threshold.", "Different perspectives on uncertainty.", "In standard evaluation setups, not only are system scores reported but also whether the differences are statistically significant (Dror et al., 2018).", "Therefore, we ask how often differences that are statistically significant for one test are also statistically significant for another.", "The details of this experiment are presented in Appendix D and show, perhaps unsurprisingly, different behavior for different tests.", "In particular, the paired t-test is the one that most often finds differences to be significant (for 41% of pairs); Mood's test, an unpaired test to compare medians, finds significance for only 21% of pairs; and the sign test and Wilcoxon's signed-rank test (related to BT) are in between (for 35% and 40% of the pairs, respectively).", "Sources of disagreement.", "Based on the analysis of Sec. 3, we know that the difference between MEAN and MEDIAN is due to the presence of statistical outliers, while the difference between MEDIAN and BT is due to the presence of different test instance types (Fig. 3).", "With real NLP datasets, in Fig. 4, we observe some discrepancy between MEAN and MEDIAN, indicating the presence of outliers.", "There is even more disagreement between MEDIAN and BT, indicating the presence of different types of test instances, as illustrated in Fig. 3.", "6 Related work", "Several studies have made a critical assessment of standard evaluation methodologies.", "For example, Freitag et al. (2020) demonstrate the advantages of carefully choosing which references to use for NLG evaluation.", "Mathur et al. (2020) show that outliers matter in practice.", "Recently, Graham et al. (2020) draw attention to test set size.", "Several works have emphasized the importance of careful statistical testing (Rankel et al., 2011; Owczarzak et al., 2012; Graham, 2015; Dror et al., 2018).", "They recommend paired statistical tests.", "Finally, Novikova et al. (2018) report that relative rankings yield more discriminative results than absolute assessments, which further motivates aggregation mechanisms like BT.",
"Aggregations.", "Pairwise comparison mechanisms date back to Thurstone (1927).", "Subsequently, the Bradley-Terry (BT) model has become a standard pairwise comparison model (Bradley and Terry, 1952).", "In NLP, BT-inspired mechanisms have sometimes been used to aggregate human assessments.", "For instance, Deriu et al. (2020) ranked chatbots regarding their ability to mimic the conversational behavior of humans.", "Item response theory (IRT) has a similar formulation to BT, but also estimates the difficulty of each test instance using a latent-variable Bayesian model (Dras, 2015).", "IRT has been applied to perform dataset filtering (Lalor et al., 2016, 2019), evaluate chatbots from human assessments (Sedoc and Ungar, 2020), and aggregate human assessments in machine translation (Dras, 2015).", "Elo (Elo, 1978) and TrueSkill (Herbrich et al., 2007) are famous extensions of the BT model, commonly used to rate players in the context of gaming or sports events.", "Elo views player strengths as normally distributed random variables.", "TrueSkill is a Bayesian variant of Elo.", "Since 2015, the Workshop on Machine Translation (WMT) has been using TrueSkill to rank models based on human assessments, following the methodology of Sakaguchi et al. (2014).", "We provide a detailed presentation and comparison of BT, Elo, and TrueSkill in Appendix G, and make both Elo and TrueSkill available as alternatives to BT in the released tool.", "The arguments in favor of BT made in this work transfer to its variants, including IRT, Elo, and TrueSkill, and the conclusions drawn from the experiments of Sec. 5 still hold when replacing BT by Elo or TrueSkill (Appendix G).", "Our work extends previous works that have considered BT variants by analyzing the potential causes for disagreement with MEAN and MEDIAN and by measuring the disagreement in real NLP evaluation setups.", "We briefly discuss some possible questions raised by the use of BT-like metrics, with more details provided in Appendices E, F, G, and H.", "Extension to other evaluation setups.", "The experiments of Sec. 5 focus on reference-based NLG evaluation metrics.", "However, the arguments laid out throughout the paper apply beyond this setup.", "Any comparison of systems based on score aggregation can suffer from outliers and complex pairing structures (e.g., Fig. 2).", "Future work should replicate our experimental setup for reference-free NLG (Zhao et al., 2020), classification, or regression tasks.", "Type imbalance.", "Imagine a test set with a majority of easy instances and few hard ones.", "A system A could perform slightly worse than B on easy instances but much better on hard ones, and will be declared worse by BT.", "If one views this decision as problematic, then one should probably acknowledge that the test set is not representative of what should be measured.", "If hard instances matter more, there should be a majority of them in the test set.", "Hoping that MEAN will be swayed to output the intuitive ordering of systems by a minority of test instances is a peculiar expectation to have about the evaluation setup.", "To diagnose such pathological cases, our tool, Pairformance, offers the possibility to view pairwise plots (as in Fig.
2) and histograms of score differences.", "More generally, better aggregation mechanisms such as BT do not solve all potential problems of evaluation methodologies.", "Other aspects (such as choosing evaluation metrics or meaningful, representative, and large test sets) are all independent of the choice of aggregation mechanism, but also critical to the quality of the evaluation.", "Transitivity.", "BT is not computed independently for each system, and it can happen that adding or removing a baseline impacts the scores of other systems.", "We explain this phenomenon in Appendix F and show that it is rarely a problem in real data.", "More generally, we discuss the connection with Arrow's impossibility theorem in the context of the aggregation of social preferences (Arrow, 1950).", "The Pairformance tool gets around this difficulty by offering the possibility of analyzing each pair of systems independently.", "Relaxing assumptions.", "BT assumes that the relative strengths of systems remain constant across test instances.", "This might not always be true, especially when some systems are crafted for a specific kind of instance but perform badly on others.", "In such cases, BT still produces meaningful and easily interpretable results but fails to capture the latent structure of system strengths.", "Several refinements of BT are possible; e.g., item response theory extends BT by modeling instance difficulty, and Elo and TrueSkill allow system strengths to be stochastic and vary across instances.", "These refinements come at the cost of introducing new parameters, and it remains unclear how to choose these parameters in practice.", "Future work should investigate systematic ways to choose these parameters.", "Tool description.", "We release Pairformance, a tool for performing full diagnostic analyses based on an evaluation dataframe made of the evaluation scores of systems and baselines.", "It can perform the analysis based on MEAN, MEDIAN, BT, Elo, and TrueSkill.", "For each aggregation technique, it outputs a full pairwise analysis of all pairs of systems.", "For MEAN and MEDIAN, it compares score differences for pairs of systems.", "For BT, Elo, and TrueSkill, it estimates the probability that one system is better than another.", "All analyses are accompanied by appropriate statistical testing.", "See Fig. 5 for an example based on the BT mechanism.",
"Furthermore, the tool can plot the histogram of paired differences X_A^(l) − X_B^(l), allowing for the direct identification of pathological patterns such as those discussed above.", "We performed a critical assessment of the standard NLP evaluation methodology based on averaged scores, which ignores the natural instance-level pairing of evaluation scores when comparing systems.", "We showed the importance of the pairing and demonstrated the advantages of paired mechanisms such as Bradley-Terry (BT) over more standard aggregation schemes such as the mean or median.", "The choice of aggregation mechanism matters in real evaluation setups, and we therefore recommend BT as a robust aggregation mechanism.", "To facilitate adoption, we release Pairformance, a new tool to perform full analyses of system scores using BT and two of its variants, Elo and TrueSkill.", "We thank the anonymous reviewers for their insightful comments and suggestions, which greatly improved the final version of the paper.", "With support from the Swiss National Science Foundation (grant 200021_185043), the European Union (TAILOR, grant 952215), and gifts from Google, Facebook, and Microsoft." ]
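Such a diagnostic histogram of paired differences is easy to produce for any pair of systems; in this sketch, scores_a and scores_b are placeholder per-instance score lists, not the tool's internal API:

```python
import numpy as np
import matplotlib.pyplot as plt

diffs = np.asarray(scores_a) - np.asarray(scores_b)  # paired per-instance differences
plt.hist(diffs, bins=30)
plt.axvline(0, linestyle="--")
plt.axvline(np.median(diffs), linestyle=":")  # median > 0 <=> BT prefers A (cf. Prop. 1)
plt.xlabel("score(A) - score(B) per test instance")
plt.show()
```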
[ "abstain", "method", "objective", "result", "method", "abstain", "abstain", "abstain", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "method", "objective", "other", "other" ]
[ "Noise Stability Regularization for Improving BERT Fine-tuning Hang Hua 1 * , Xingjian Li 2 , 3 * , Dejing Dou 2 (cid:66) , Chengzhong Xu 3 , Jiebo Luo 1 1 University of Rochester, Rochester, NY, USA { hhua2, jluo } @cs.rochester.edu 2 Big Data Lab, Baidu Research, Beijing, China { lixingjian, doudejing } @baidu.com 3 Department of Computer Science, University of Macau, Macau, China [email protected] Abstract Fine-tuning pre-trained language models such as BERT has become a common practice dominating leaderboards across various NLP tasks.", "Despite its recent success and wide adoption, this process is unstable when there are only a small number of training samples available.", "The brittleness of this process is often reflected by the sensitivity to random seeds.", "In this paper, we propose to tackle this problem based on the noise stability property of deep nets, which is investigated in recent literature (Arora et al., 2018; Sanyal et al., 2020).", "Specifically, we introduce a novel and effective regularization method to improve fine-tuning on NLP tasks, referred to as L ayer-wise N oise S tability R egularization ( LNSR ).", "We extend the theories about adding noise to the input and prove that our method gives a stabler regularization effect.", "We provide supportive evidence by experimentally confirming that well-performing models show a low sensitivity to noise and fine-tuning with LNSR exhibits clearly higher generalizability and stability.", "Furthermore, our method also demonstrates advantages over other state-of-the-art algorithms including L 2 SP (Li et al., 2018), Mixout (Lee et al., 2020) and SMART (Jiang et al., 2020).", "Large-scale pre-trained language models such as BERT (Devlin et al., 2019) have been widely used in natural language processing tasks (Guu et al., 2020; Liu, 2019; Wadden et al., 2019; Zhu et al., 2020b).", "A typical process of training a supervised downstream dataset is to fine-tune a pre-trained model for a few epochs.", "In this process, most of the model's parameters are reused, while a random initialized task-specific layer is added to adapt the model to the new task.", "Fine-tuning BERT has significantly boosted the state of the art performance on natural lanContribution during internship at Baidu Research.", "guage understanding (NLU) benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019).", "However, despite the impressive empirical results, this process remains unstable due to the randomness involved by data shuffling and the initialization of the task-specific layer.", "The observed instability in fine-tuning BERT was first discovered by Devlin et al. (2019); Dodge et al. (2020), and several approaches have been proposed to solve this problem (Lee et al., 2020; Zhang et al., 2020; Mosbach et al., 2020).", "In this study, we consider the fine-tuning stability of BERT from the perspective of the sensitivity to input perturbation.", "This is motivated by Arora et al. (2018) and Sanyal et al. 
(2020) who show that noise injected at the lower layers has very little effect on the higher layers for neural networks with good generalizability.", "However, for a well pre-trained BERT, we find that the higher layers are still very sensitive to perturbation of the lower layers (as shown in Figure 1), implying that the high-level representations of the pre-trained BERT may not generalize well on downstream tasks, which consequently leads to instability.", "This phenomenon coincides with the observation that transferring the top pre-trained layers of BERT slows down learning and hurts performance (Zhang et al., 2020).", "In addition, Yosinski et al. (2014) also point out that in transfer learning models for object recognition, the lower pre-trained layers learn more general features while the higher layers closer to the output specialize more to the pre-training tasks.", "We argue that this result also applies to BERT.", "Intuitively, if a trained model is insensitive to the perturbation of the lower layers' output, then the model is confident about the output, and vice versa.", "Based on the above theoretical and empirical results, we propose a simple and effective regularization method to reduce the noise sensitivity of BERT and thus improve the stability and performance of fine-tuned BERT.", "To verify our approach, we conduct extensive experiments on different few-sample (fewer than 10k training samples) NLP tasks, including CoLA (Warstadt et al., 2019), MRPC (Dolan and Brockett, 2005), RTE (Wang et al., 2018; Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007), and STS-B (Cer et al., 2017).", "With the layer-wise noise stability regularization, we obtain strong empirical performance.", "Compared with other state-of-the-art models, our approach not only improves the fine-tuning stability (with a smaller standard deviation) but also consistently improves the overall performance (with a larger mean, median and maximum).", "In summary, our main contributions are: We propose a lightweight and effective regularization method, referred to as Layer-wise Noise Stability Regularization (LNSR), to improve the local Lipschitz continuity of each BERT layer and thus ensure the smoothness of the whole model.", "The empirical results show that the fine-tuned BERT models regularized with LNSR obtain significantly more accurate and stable results.", "LNSR also outperforms other state-of-the-art methods aiming at stabilizing fine-tuning, such as L2-SP (Li et al., 2018), Mixout (Lee et al., 2020) and SMART (Jiang et al., 2020).", "We are the first to study the effect of noise stability in NLP tasks.", "We extend classic theories of adding noise to the input to the case of explicitly constraining the output consistency under such noise.", "We theoretically prove that our proposed layer-wise noise stability regularizer is equivalent to a special case of the Tikhonov regularizer, which serves as a more stable regularizer than simply adding noise to the input (Rifai et al., 2011).", "We investigate the relation of the noise stability property to the generalizability of BERT.", "We find that in general, models with good generalizability tend to be insensitive to noise perturbation; the lower layers of BERT show better error resilience, but the higher layers of BERT remain sensitive to the lower layers' perturbation (as is depicted in Figure 1).",
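The kind of sensitivity measurement behind Figure 1 can be approximated with a short probe; a minimal sketch assuming a HuggingFace BertModel, where the layer index, noise scale, and relative-change metric are illustrative choices rather than the paper's exact protocol:

```python
import torch
from transformers import BertModel

def layer_sensitivity(model, input_ids, noise_layer=1, sigma=0.1):
    """Inject Gaussian noise at the output of encoder layer `noise_layer`
    and report the relative change in each higher layer's hidden states."""
    model.eval()
    with torch.no_grad():
        clean = model(input_ids, output_hidden_states=True).hidden_states

    def add_noise(module, inputs, output):
        hidden = output[0]
        return (hidden + sigma * torch.randn_like(hidden),) + output[1:]

    handle = model.encoder.layer[noise_layer].register_forward_hook(add_noise)
    with torch.no_grad():
        noisy = model(input_ids, output_hidden_states=True).hidden_states
    handle.remove()

    # hidden_states[l + 1] is the output of encoder layer l; start one layer
    # above the injection point so the trivially perturbed layer is skipped.
    return [((noisy[l] - clean[l]).norm() / clean[l].norm()).item()
            for l in range(noise_layer + 2, len(clean))]

model = BertModel.from_pretrained("bert-base-uncased")
ids = torch.randint(1000, 2000, (1, 16))  # dummy token ids, for illustration
print(layer_sensitivity(model, ids))
```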
"Pre-training has been well studied in machine learning and natural language processing (Erhan et al., 2009, 2010).", "Mikolov et al. (2013) and Pennington et al. (2014) proposed to use distributional representations (i.e., word embeddings) for individual words.", "Dai and Le (2015) proposed to train a language model or an auto-encoder with unlabeled data and then leveraged the obtained model to fine-tune on downstream tasks.", "Recently, pre-trained language models, like ELMo (Peters et al., 2018), GPT/GPT-2 (Radford, 2018; Radford et al., 2019), BERT (Devlin et al., 2019), the cross-lingual language model (briefly, XLM) (Lample and Conneau, 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2020), have attracted more and more attention in the natural language processing community.", "These models are first pre-trained on large amounts of unlabeled data to capture rich representations of the input, and then applied to downstream tasks by either providing context-aware embeddings of an input sequence (Peters et al., 2018) or initializing the parameters of the downstream model (Devlin et al., 2019) for fine-tuning.", "Such pre-training approaches deliver decent performance on natural language understanding tasks.", "Fine-tuning instability of BERT has been reported in various previous works.", "Devlin et al. (2019) report instabilities when fine-tuning BERT on small datasets and resort to performing multiple restarts of fine-tuning and selecting the model that performs best on the development set.", "Dodge et al. (2020) perform a large-scale empirical investigation of the fine-tuning instability of BERT.", "They found dramatic variations in fine-tuning accuracy across multiple restarts and argue that these might be related to the choice of random seed and the dataset size.", "Lee et al. (2020) propose a new regularization method named Mixout to improve the stability and performance of fine-tuning BERT.", "Zhang et al. (2020) evaluate the importance of the debiasing step empirically by fine-tuning BERT with both BERTAdam and the standard Adam optimizer (Kingma and Ba, 2015), and propose a re-initialization method to get a better initialization point for fine-tuning optimization.", "Mosbach et al. (2020) analyse the cause of fine-tuning instability and propose a simple but strong baseline (a small learning rate combined with bias correction).", "There have been several regularization approaches to stabilizing the performance of models.", "Loshchilov and Hutter (2019) propose a decoupled weight decay regularizer integrated into the Adam (Kingma and Ba, 2015) optimizer to prevent neural networks from becoming too complex.", "Gunel et al. 
(2020) use a contrastive learning method to augment the training set in order to improve generalization performance.", "In addition, the spectral norm (Yoshida and Miyato, 2017; Roth et al., 2019) serves as a general tool for constraining the Lipschitz constant of a weight matrix, which can increase the stability of neural networks.", "Several noise-based methods have also been proposed to improve the generalizability of pre-trained language models, including SMART (Jiang et al., 2020), FreeLB (Zhu et al., 2020a) and R3F (Aghajanyan et al., 2020).", "They achieve state-of-the-art performance on the GLUE, SNLI (Bowman et al., 2015), SciTail (Khot et al., 2018), and ANLI (Nie et al., 2020) NLU benchmarks.", "Most of these algorithms employ adversarial training methods to improve the robustness of language model fine-tuning.", "SMART uses an adversarial methodology to encourage models to be smooth within a neighborhood of each input; FreeLB optimizes a direct adversarial loss through iterative gradient ascent steps; R3F removes the adversarial nature of SMART and optimizes the smoothness of the whole model directly.", "Different from these methods, our proposed method does not adopt the adversarial training strategy; we optimize the smoothness of each layer of BERT directly and thus improve the stability of the whole model.", "One of the central issues in neural network training is to determine the optimal degree of complexity for the model.", "A model which is too limited will not sufficiently capture the structure in the data, while one which is too complex will model the noise in the data (the phenomenon of over-fitting).", "In either case, the performance on new data, that is, the ability of the network to generalize, will be poor.", "The problem can be regarded as one of finding the optimal trade-off between the high bias of a model which is too inflexible and the high variance of a model with too much freedom (Geman et al., 1992; Bishop, 1995; Novak et al., 2018; Bishop, 1991).", "To control the trade-off of bias against variance of BERT models, we impose an explicit noise regularization method.", "Denoting the training set as D, we give the general form of the optimization objective for a BERT model $f(\cdot; \theta)$ with L layers as follows: $\theta^* = \arg\min_\theta \mathbb{E}_{(x,y) \sim D} [ L(f(x; \theta), y) + R(\theta) ]$.", "(1) To instantiate $R(\theta)$, we first define the injection position as the input of layer b, which is denoted as $x_b$.", "If the regularization is operated at the output of layer r, we can further denote the function between layer b and layer r as $f_{b,r}$, satisfying $1 \le b \le r \le L$.", "Denoting the regularization weight corresponding to each $f_{b,r}$ as $\lambda_{b,r}$, given a sample $(x, y) \in D$, the regularization term is represented by the following formula: $R(\theta) = \sum_{r=b}^{L} \lambda_{b,r} \, \| f_{b,r}(x_b + \epsilon) - f_{b,r}(x_b) \|^2$.", "To implement the noise stability regularization, we inject a Gaussian noise vector $\epsilon$ into $x_b$ and obtain a neighborhood point $x_b + \epsilon$.", "Specifically, each element $\epsilon_i$ is independently sampled from a Gaussian distribution with mean zero and standard deviation $\sigma$, i.e., $\epsilon_i \sim N(0, \sigma^2)$.", "The probability density function of the noise distribution can be written as $p(\epsilon_i) = \frac{1}{\sqrt{2\pi}\sigma} e^{-\epsilon_i^2 / (2\sigma^2)}$.", "Our goal is to minimize the discrepancy between the outputs over $f_{b,r}$, defined as $\| f_{b,r}(x_b + \epsilon) - f_{b,r}(x_b) \|^2$.", "In our framework, we use a fixed position b as the position of noise injection and constrain the output distance on all layers following layer b.", 
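A minimal PyTorch sketch of this regularization term, assuming a HuggingFace-style BERT encoder whose layers can be re-run from a perturbed hidden state; attention masks are omitted and the per-layer weights default to 1, so this is an illustration rather than the authors' exact implementation:

```python
import torch

def lnsr_term(model, input_ids, b=1, sigma=0.05, weights=None):
    """R = sum_{r=b}^{L} lambda_{b,r} * ||f_{b,r}(x_b + eps) - f_{b,r}(x_b)||^2,
    with eps ~ N(0, sigma^2 I) injected at the input of encoder layer b."""
    clean = model(input_ids, output_hidden_states=True).hidden_states
    x_b = clean[b]  # hidden_states[b] is the input to encoder layer b
    hidden = x_b + sigma * torch.randn_like(x_b)

    reg = 0.0
    for r in range(b, len(model.encoder.layer)):
        hidden = model.encoder.layer[r](hidden)[0]  # perturbed forward pass
        lam = 1.0 if weights is None else weights[r]
        # Compare against the clean output of the same layer.
        reg = reg + lam * (hidden - clean[r + 1]).pow(2).sum()
    return reg
```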
"Regularzation is a kind of commonly used techniques to reduce the function complexity and, as a result, to make the learned model generalize well on unseen examples.", "In this part, we theoretically prove that the proposed LNSR algorithm has the effects of encouraging the local Lipschitz continuity and imposing a Tikhonov regularizer under different assumptions.", "For simplicity, we omit the notations about the layer number in this part, denoting f as the target function and x as the input of f parameterized by .", "Given a sample ( x, y ) D , we discuss the general form of the noise stability defined as following: R ( ) = E {(cid:107) f ( x + ) f ( x ) (cid:107) 2 } .", "Lipschitz continuity .", "The Lipschitz property reflects the degree of smoothness for a function.", "Recent theoretical studies on deep learning has revealed the close connection between Lipschitz property and generalization (Bartlett et al., 2017; Neyshabur et al., 2017).", "(cid:107) f ( x + ) f ( x ) (cid:107) 2 (cid:107) x + x (cid:107) 2 .", "Thus the noise stability regularization can be regarded as minimizing the Lipschitz constant in a local region around the input x .", "Tikhonov regularizer .", "The Tikhonov regularizer (Willoughby, 1979) involves constraints on the derivatives of the objective function with respect to different orders.", "For the simplest first-order case, it can be regarded as imposing robustness and shaping a flatter loss surface at input, which makes the learned function smoother.", "Assuming that the magnitude of is small, we can expand the first term as a Taylor approximation as: f ( x + ) = f ( x )+ J f ( x )+12 T H f ( x ) + O ( 3 ) , (5) where J f ( x ) and H f ( x ) refer to the Jacobian and Hessian of f with respect to the input x respectively.", "Ignoring the higher order term O ( 3 ) and denoting f k as the k-th output of the function f , we can rewrite the regularizer by substituting Eq.", "5 in Eq.", "3 as: R ( )= E {(cid:107) J f ( x ) + 1 2 T H f ( x ) (cid:107) 2 } = (cid:90) (cid:107) J f ( x ) + 1 2 T H f ( x ) (cid:107) 2 p ( ) d = (cid:88) k (cid:90) (cid:107) J f k ( x )+ 1 2 T H f k ( x ) (cid:107) 2 p ( ) d.", "(6) We define the input vector x as ( x 1 , x 2 , ......, x d 1 ) and noise vector as = ( 1 , 2 , ......, d 1 ) .", "Assuming that distributions of the noise and the input are irrelevant, and the derivative of f with respect to different elements of the input vector is independent with each other, we expand the second order term corresponding to the Jacobian as: J ( f k ) = (cid:90) (cid:107) J f k ( x ) (cid:107) 2 p ( ) d = (cid:90) (cid:88) i ( i f k x i ) 2 p ( ) d = (cid:88) i ( f k x i ) 2 (cid:90) 2 i p ( ) d = 2 (cid:107) J f k ( x ) (cid:107) 2 .", "(7) According to the characteristics of the Gaussian distribution, we also have (cid:90) i j p ( ) d = 0 for any i (cid:54) = j.", "Thus, we can rewrite the second order term corresponding to the Hessian in Eq.", "6 as: H ( f k ) = (cid:90) (cid:107) 1 2 T H f k ( x ) (cid:107) 2 p ( ) d = (cid:90) 1 4 (cid:88) i ( 2 i 2 f k x 2 i ) 2 p ( ) d = 1 4 (cid:88) i ( 2 f k x 2 i ) 2 (cid:90) 4 i p ( ) d = C (cid:107) Tr( H f k ( x ) TH f k ( x )) (cid:107) 2 , (9) Where C is a constant independent of the input x .", "The third term generated from the expansion of Eq.", "6 is zero as we have (cid:82) 3 p ( ) d = 0 .", "Thus we get R ( ) = (cid:88) k { J ( f k ) + H ( f k ) } .", "Considering that the input and output of the function f are both scalar variable, the Tikhonov regularization (Willoughby, 1979) takes 
"Considering that the input and output of the function f are both scalar variables, the Tikhonov regularization (Willoughby, 1979) takes the general form: $R_T(\theta) = \sum_r \int h_r(x) \left( \frac{\partial^r f}{\partial x^r} \right)^2 dx$.", "(10) Eq. (10) shows that our proposed regularizer ensuring noise stability is equivalent to a special case of the Tikhonov regularizer, where we involve the first- and second-order derivatives of the objective function f.", "An alternative for improving robustness is to directly add noise to the input, without explicitly constraining the output stability.", "Rifai et al. (2011) derived that adding noise to the input has the effect of penalizing both the L2-norm of the Jacobian $\| J_f(x) \|^2$ and the trace of the Hessian $\mathrm{Tr}(H_f(x))$, whereas the Hessian term is not constrained to be positive.", "In contrast, the regularizer induced by our proposed LNSR is guaranteed to be positive, as it involves the sum of squares of the first- and second-order derivatives.", "Moreover, our work relaxes the assumption of an MSE regression loss required by Rifai et al. (2011).", "By imposing the explicit constraint of noise stability on middle-layer representations, we extend the theoretical understanding of noise stability to deep learning algorithms.", "Algorithm 1: Layer-wise Noise Stability Regularization (LNSR)
Input: training set D, perturbation bound $\sigma$, learning rate $\eta$, number of layers L, number of training epochs N, function f with parameters $\theta$, position of noise injection b, and regularization weights for each layer $\{\lambda_b, \ldots, \lambda_L\}$.
1:  Initialize $\theta$
2:  for epoch = 1, 2, ..., N do
3:    for minibatch $B \subset D$ do
4:      $R \leftarrow 0$
5:      for each $x \in B$ do
6:        $\epsilon \sim N(0, \sigma^2 I)$
7:        $\tilde{x} \leftarrow x + \epsilon$
8:        forward pass given $x$ and $\tilde{x}$ as inputs
9:        for r = b, b+1, ..., L do
10:         $R \leftarrow R + \lambda_{b,r} \| f_{b,r}(\tilde{x}) - f_{b,r}(x) \|^2$
11:       end for
12:     end for
13:     $g \leftarrow \nabla_\theta \frac{1}{|B|} \sum_{(x,y)} [ L(f(x; \theta), y) + R ]$
14:     $\theta \leftarrow \theta - \eta g$
15:   end for
16: end for
Output: $\theta$", 
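A condensed PyTorch-style sketch of the training loop in Algorithm 1, reusing `lnsr_term` from the earlier sketch; the task head, data loader, and hyperparameter values are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def train_lnsr(model, head, loader, epochs=3, b=1, sigma=0.05, lam=1.0, lr=2e-5):
    """Minimise task loss + layer-wise noise stability term (Algorithm 1)."""
    params = list(model.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for input_ids, labels in loader:
            out = model(input_ids, output_hidden_states=True)
            task_loss = F.cross_entropy(head(out.pooler_output), labels)
            # lnsr_term re-runs the perturbed layers; see the earlier sketch.
            loss = task_loss + lam * lnsr_term(model, input_ids, b=b, sigma=sigma)
            opt.zero_grad()
            loss.backward()
            opt.step()
```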
"4 Experiments In this section, we experimentally demonstrate the effectiveness of the LNSR method on text classification tasks against other regularization methods, and confirm that insensitivity to noise promotes the generalizability and stability of BERT.", "We conduct experiments on four few-sample (fewer than 10k training samples) text classification tasks from the GLUE benchmark (https://gluebenchmark.com/).", "The Corpus of Linguistic Acceptability (CoLA (Warstadt et al., 2019)) consists of English acceptability judgments drawn from books and journal articles on linguistic theory.", "Each example is a sequence of words annotated with whether it is a grammatical English sentence.", "This is a binary classification task, and the Matthews correlation coefficient (MCC) (Matthews, 1975) is used to evaluate performance.", "The Microsoft Research Paraphrase Corpus (MRPC (Dolan and Brockett, 2005)) is a corpus of sentence pairs with human annotations for whether the sentences in the pair are semantically equivalent.", "The evaluation metric is the average of F1 and Accuracy.", "Recognizing Textual Entailment (RTE (Wang et al., 2018; Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007)) is a corpus of textual entailment, and each example is a sentence pair annotated with whether the first sentence entails the second.", "The evaluation metric is Accuracy.", "The Semantic Textual Similarity Benchmark (STS-B (Cer et al., 2017)) is a regression task.", "Each example is a sentence pair human-annotated with a similarity score from 1 to 5; the task is to predict these scores.", "The evaluation metric is the average of the Pearson and Spearman correlation coefficients.", "We use BERT (Devlin et al., 2019), a large-scale bidirectional pre-trained language model, as the base model in all experiments.", "We adopt the PyTorch edition implemented by Wolf et al. (2019).", "Fine-tuning.", "We use the standard BERT fine-tuning method described in Devlin et al. (2019).", "L2-SP (Li et al., 2018) is a regularization scheme that explicitly promotes the similarity of the final solution with the initial model.", "It is usually used for preventing pre-trained models from catastrophic forgetting.", "We adopt the form $\Omega(w) = \frac{\alpha}{2} \| w_S - w_S^0 \|^2 + \frac{\beta}{2} \| w_{\bar{S}} \|^2$.", "Mixout (Lee et al., 2020) is a stochastic regularization technique motivated by Dropout (Srivastava et al., 2014) and DropConnect (Wan et al., 2013).", "At each training iteration, each model parameter is replaced with its pre-trained value with probability p.", "The goal is to improve the generalizability of pre-trained language models.", "SMART (Jiang et al., 2020) imposes a smoothness regularizer in an adversarial manner to control the model complexity at the fine-tuning stage.", "It also employs a class of Bregman proximal point optimization methods to prevent the model from aggressively updating during fine-tuning.", "Our model is implemented in PyTorch based on the Transformers framework (https://huggingface.co/transformers/index.html).", "Specifically, we use the learning setup and hyperparameters recommended by Devlin et al. (2019).", "We use the Huggingface edition of the Adam (Kingma and Ba, 2015) optimizer (without bias correction) with a learning rate of $2 \times 10^{-5}$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, and warmup over the first 10% of the total steps.", "We fine-tune the entire model (340 million parameters), of which the vast majority start as pre-trained weights (BERT-Large-Uncased), plus the classification layer (2048 parameters).", "Weights of the classification layer are initialized with $N(0, 0.02^2)$.", "We train with a batch size of 32 for 3 epochs.", "More details of our experimental setup are described in Appendix A.", 
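The L2-SP baseline described above amounts to a simple penalty toward the pre-trained starting point; a minimal sketch, with α and β as illustrative values:

```python
import torch

def l2_sp_penalty(model, pretrained_state, alpha=0.1, beta=0.01):
    """Omega(w) = alpha/2 * ||w_S - w_S^0||^2 + beta/2 * ||w_Sbar||^2:
    parameters with a pre-trained starting point are pulled back toward it,
    while newly added (e.g., classifier) parameters get plain weight decay."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in pretrained_state:
            penalty = penalty + 0.5 * alpha * (p - pretrained_state[name]).pow(2).sum()
        else:
            penalty = penalty + 0.5 * beta * p.pow(2).sum()
    return penalty
```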
"4.4 Overall Performance Table 1 shows the results of all the models on the selected GLUE datasets.", "We train on each dataset with 25 random seeds.", "To implement our LNSR, we uniformly inject noise at the first layer of BERT-large for the comparison with baseline models.", "As we can see from the table, our model outperforms all the baseline models in mean and max values, which indicates the stronger generalizability of our model compared to the baselines.", "The p-values between the accuracy distributions of standard BERT fine-tuning and our model are calculated to verify whether the improvements are significant.", "We obtain very small p-values on all tasks: RTE: $9.7 \times 10^{-7}$, MRPC: $2.3 \times 10^{-4}$, CoLA: $4.7 \times 10^{-8}$, STS-B: $3.3 \times 10^{-8}$.", "Standard deviation is an indicator of the stability of a model's performance: a higher std means more sensitivity to random seeds.", "Our model shows a lower standard deviation on each task, which means our model is less sensitive to random seeds than the other models.", "Figure 2 presents a clearer illustration.", "To sum up, our proposed method can effectively improve the performance and stability of fine-tuning BERT.", 
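Such per-task p-values can be computed from the per-seed score distributions; a minimal sketch (the paper does not state which statistical test it uses, so the unpaired Welch t-test here is an assumption):

```python
from scipy import stats

def seed_significance(baseline_scores, lnsr_scores):
    """Compare evaluation scores collected across the 25 random seeds for
    standard fine-tuning vs. LNSR; returns the two-sided p-value."""
    _, p = stats.ttest_ind(lnsr_scores, baseline_scores, equal_var=False)
    return p
```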
"To verify the effectiveness of our proposed LNSR model, we conduct several ablation experiments, including fine-tuning with more training epochs and noise perturbation without regularization (we inject noise directly into the output of a specific layer and then use the perturbed representation for the forward pass and loss computation; this process is similar to a vector-space representation augmentation).", "The results are shown in Table 2.", "We observe that the benefit obtained by longer training is limited.", "Similarly, fine-tuning with noise perturbation only achieves slightly better results on two of these tasks, showing that simply adding noise without an explicit restriction on outputs may not be sufficient to obtain good generalizability.", "In contrast, BERT models with LNSR perform better on each task.", "This verifies our claim that LNSR can promote the stability of BERT fine-tuning and meanwhile improve the generalizability of the BERT model.", "Table 2: Ablation study of LNSR; for each task, we report the mean, standard deviation, and max of evaluation scores across 25 random seeds.
             RTE                  MRPC                 CoLA                 STS-B
             mean   std   max     mean   std   max     mean   std   max     mean   std   max
FT           70.13  1.84  72.56   87.57  0.92  89.16   61.56  1.34  64.10   89.38  0.53  90.23
FT (4 Ep.)   70.69  1.97  73.65   88.15  0.65  89.21   60.69  1.24  62.09   89.29  0.56  90.12
FT+Noise     70.62  1.56  72.93   87.95  0.83  89.33   60.18  1.58  62.59   89.34  0.51  90.11
LNSR (ours)  73.31  1.55  76.17   88.50  0.56  90.02   63.35  1.05  65.99   90.23  0.31  90.97", "We verify the effects of our proposed method on the generalizability of BERT models in two ways: the generalization gap and the models' performance on fewer training samples.", "Due to the limited data and the extremely high complexity of the BERT model, a bad fine-tuning starting point makes the adapted model overfit the training data and fail to generalize well to unseen data.", "Both measures intuitively reflect a model's generalizability.", "Table 3 shows the mean training accuracy, mean evaluation accuracy and generalization gap of different models on each task.", "As we can see from the table, fine-tuning with LNSR can effectively narrow the generalization gap and achieve higher evaluation scores.", "The effect of narrowing the generalization gap is also reflected in Figure 3, where we can see higher evaluation accuracy and lower evaluation loss.", "We sample subsets from the two relatively larger datasets, CoLA (8.5k training samples) and STS-B (7k training samples), with sampling ratios of 0.15, 0.3 and 0.5.", "As shown in Figure 4, fine-tuning with LNSR shows a clear advantage with fewer training samples, suggesting that LNSR can effectively promote the model's generalizability.", "[Figure 4: Mean evaluation score comparison of fine-tuning BERT with and without LNSR on fewer training samples of the CoLA and STS-B tasks; mean values are calculated over 20 random seeds.]", "We briefly discuss the sensitivity to the position of noise injection, as it is a pre-determined hyperparameter of our method.", "As shown in Figure 5 in Appendix A, we observe that the performance of LNSR does not fluctuate much as the position of noise injection changes.", "All injection positions bring significant improvements over vanilla fine-tuning.", "Note that, with LNSR, noise injection at the lower layers usually leads to relatively higher accuracy and stability, implying that LNSR may be more effective when it affects both the lower and higher layers of the network.", "Our method is related to SMART (Jiang et al., 2020), FreeLB (Zhu et al., 2020a) and R3F (Aghajanyan et al., 2020).", "As mentioned before, most of these approaches employ adversarial training strategies to improve the robustness of BERT fine-tuning.", "SMART solves a supremum by using an adversarial methodology to find the largest KL divergence within an $\epsilon$-ball; FreeLB optimizes a direct adversarial loss $L_{FreeLB}(\theta) = \sup_{\delta: \|\delta\| \le \epsilon} L(\theta + \delta)$ through iterative gradient ascent steps; R3F removes the adversarial nature of SMART and optimizes the smoothness of the whole model directly.", "Compared with this sort of adversarial-based algorithm, our method is easier to implement and provides a relatively rigorous theoretical guarantee.", "The design of layer-wise regularization is sensible in that it exploits the characteristics of hierarchical representations in modern deep neural networks.", "Studies in knowledge distillation have reported similar findings: imitating intermediate-layer representations (Adriana et al., 2015; Zagoruyko and Komodakis, 2016) performs better than aligning only the final outputs (Hinton et al., 2015).", "Moreover, LNSR allows us to use different regularization weights for different layers (we use a fixed weight of 1 on all layers in this paper).", "We leave this exploration to future work.", "In this paper, we propose Layer-wise Noise Stability Regularization (LNSR) as a lightweight and effective method to improve generalizability and stability when fine-tuning BERT on few training samples.", "Our proposed LNSR method is a general technique that improves model output stability while maintaining or improving the original performance.", "Furthermore, we provide a theoretical analysis of the relationship of our method to Lipschitz continuity and the Tikhonov regularizer.", "Extensive empirical results show that our proposed method can effectively improve the generalizability and stability of the BERT model.", "Hang Hua would like to thank Jeffries for supporting his research." ]
[ "other", "abstain", "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "result", "abstain", "objective", "result", "result", "result", "objective", "objective", "objective", "objective", "objective", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "objective", "objective", "method", "objective", "other" ]
[ "We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures.", "Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together.", "The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure.", "We show the robustness and versatility of the proposed framework by reporting state-of-the-art results on the tasks of estimating selectional preference and event similarity.", "The results indicate that the combinations of representations learned with our task-independent model outperform task-specific architectures from prior work, while reducing the number of parameters by up to 95%.", "Word representations are in ubiquitous usage across all areas of natural language processing (NLP) (Col-lobert et al., 2011; Chen and Manning, 2014; Mela-mud et al., 2016).", "Standard approaches rely on the distributional hypothesis (Harris, 1954; Schutze, 1993) and learn a single word vector space based on word co-occurrences in large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017).", "This purely context-based training produces general word representations that capture the broad notion of semantic relatedness and con-flate a variety of possible semantic relations into a single space (Hill et al., 2015; Schwartz et al., 2015).", "However, this mono-faceted view of meaning is a well-known deficiency in NLP applications (Faruqui, 2016; Mrksic et al., 2017) as it fails to distinguish between fine-grained word associations.", "The space can be trained for a specific structure, such as SVO, and each word in a particular role will have a separate representation.", "Vectors for plausible SVO compositions will then be optimized to lie close together, as illustrated by Figure 1.", "For example, the verb vector study will be close to plausible subject vectors researcher or scientist and object vectors subject or art .", "For words that can occur as either subject or object, such as chicken , we obtain separate vectors for each role: one for chicken as subject and another for chicken as object .", "The resulting representations capture more detailed associations in addition to basic distributional similarity and can be used to construct representations for the whole SVO structure.", "To validate the effectiveness of our representation framework in language applications, we focus on modeling a prominent linguistic phenomenon: a general model of who does what to whom (Gell-Word Nearest Neighbours Subject memory dream, feeling, shadow, sense, moment, consciousness country state, nation, britain, china, uk, europe, government student pupil, participant, learner, candidate, trainee, child Verb see saw, view, expect, watch, notice, witness eat drink, consume, smoke, lick, swallow, cook, ingest avoid eliminate, minimise, anticipate, overcome, escape Object virus bacteria, infection, disease, worm, mutation, antibody beer ale, drink, pint, coffee, tea, wine, soup, champagne Joint SVO study (V) researcher (S), scientist (S), subject (O), art (O) eat (V) food (O), cat (S), dog (S) need (V) help (O), implementation (S), support (O) Table 1: Nearest neighbours in a function-specific space trained for the SVO structure.", "Mann and Ruhlen, 2011).", "In language, this event understanding information is typically 
captured by the SVO structures and, according to the cognitive science literature, is well aligned with how humans process sentences (McRae et al., 1997, 1998; Grefenstette and Sadrzadeh, 2011a; Kartsaklis and Sadrzadeh, 2014); it reflects the likely distinct storage and processing of objects (typically nouns) and actions (typically verbs) in the brain (Caramazza and Hillis, 1991; Damasio and Tranel, 1993).", "The quantitative results are reported on two established test sets for compositional event similarity (Grefenstette and Sadrzadeh, 2011a; Kartsaklis and Sadrzadeh, 2014).", "This task requires reasoning over SVO structures and quantifies the plausibility of the SVO combinations by scoring them against human judgments.", "We report consistent gains over established word representation methods, as well as over two recent tensor-based architectures (Tilk et al., 2016; Weber et al., 2018) which are designed specifically for solving the event similarity task.", "Furthermore, we investigate the generality of our approach by also applying it to other types of structures.", "We conduct additional experiments in a 4-role setting, where indirect objects are also modeled, along with a selectional preference evaluation of 2-role SV and VO relationships (Chambers and Jurafsky, 2010; Van de Cruys, 2014), yielding the highest scores on several established benchmarks.", "Representation models such as skip-gram with negative sampling (SGNS) (Mikolov et al., 2013b,a), GloVe (Pennington et al., 2014), or FastText (Bojanowski et al., 2017) induce a single word embedding space capturing broad semantic relatedness (Hill et al., 2015).", "For instance, SGNS makes use of two vector spaces for this purpose, which are referred to as $A_w$ and $A_c$.", "SGNS has been shown to approximately correspond to factorising a matrix $M = A_w A_c^T$, where elements in M represent the co-occurrence strengths between words and their context words (Levy and Goldberg, 2014b).", "Both matrices represent the same vocabulary: therefore, only one of them is needed in practice to represent each word.", "Typically only $A_w$ is used while $A_c$ is discarded, or the two vector spaces are averaged to produce the final space.", "Levy and Goldberg (2014a) used dependency-based contexts, resulting in two separate vector spaces; however, the relation types were embedded into the vocabulary and the model was trained only in one direction.", "Camacho-Collados et al. (2019) proposed to learn separate sets of relation vectors in addition to standard word vectors and showed that such relation vectors encode knowledge that is often complementary to what is coded in word vectors.", "Rei et al. 
(2018) and Vulic and Mrksic (2018) described related task-dependent neural nets for mapping word embeddings into relation-specific spaces for scoring lexical entailment.", "In this work, we propose a task-independent approach and extend it to work with a variable number of relations.", "Neuroscience.", "Theories from cognitive linguistics and neuroscience reveal that single-space representation models fail to adequately reflect the organisation of semantic concepts in the human brain (i.e., semantic memory): there seems to be no single semantic system indifferent to modalities or categories in the brain (Riddoch et al., 1988).", "Recent fMRI studies strongly support this proposition and suggest that semantic memory is in fact a widely distributed neural network (Davies et al., 2009; Huth et al., 2012; Pascual et al., 2015; Rice et al., 2015; de Heer et al., 2017), where sub-networks might activate selectively or more strongly for a particular function such as modality-specific or category-specific semantics (such as objects/actions, abstract/concrete, animate/inanimate, animals, fruits/vegetables, colours, body parts, countries, flowers, etc.) (Warrington, 1975; Warrington and McCarthy, 1987; McCarthy and Warrington, 1988).", "This indicates a function-specific division of lower-level semantic processing.", "Single-space distributional word models have been found to partially correlate with these distributed brain activity patterns (Mitchell et al., 2008; Huth et al., 2012, 2016; Anderson et al., 2017), but fail to explain the full spectrum of fine-grained word associations humans are able to make.", "Our work has been partly inspired by this literature.", "Compositional Distributional Semantics.", "Partially motivated by similar observations, prior work frequently employs tensor-based methods for composing separate tensor spaces (Coecke et al., 2010): there, syntactic categories are often represented by tensors of different orders based on assumptions about their relations.", "One fundamental difference is made between atomic types (e.g., nouns) and compositional types (e.g., verbs).", "Atomic types are seen as standalone: their meaning is independent of other types.", "On the other hand, verbs are compositional as they rely on their subjects and objects for their exact meaning.", "Due to this added complexity, the compositional types are often represented with more parameters than the atomic types, e.g., with a matrix instead of a vector.", "The goal is then to compose constituents into a semantic representation which is independent of the underlying grammatical structure.", "Therefore, a large body of prior work is concerned with finding appropriate composition functions (Grefenstette and Sadrzadeh, 2011a,b; Kartsaklis et al., 2012; Milajevs et al., 2014) to be applied on top of word representations.", "Since this approach represents different syntactic structures with tensors of varying dimensions, comparing syntactic constructs is not straightforward.", "This compositional approach thus struggles with transferring the learned knowledge to downstream tasks.", "State-of-the-art compositional models (Tilk et al., 2016; Weber et al., 2018) combine similar tensor-based approaches with neural training, leading to task-specific compositional solutions.", "While effective for the task at hand, the resulting models rely on a large number of parameters and are not robust: we observe deteriorated performance on other related compositional tasks, as shown in Section 6. 
Multivariable (SVO) Structures in NLP.", "Modeling SVO structures is important for tasks such as compositional event similarity, which uses all three variables, and thematic-fit modeling, which is based on SV and VO associations separately.", "Traditional solutions are typically based on clustering of word co-occurrence counts from a large corpus (Baroni and Lenci, 2010; Greenberg et al., 2015a,b; Sayeed et al., 2016; Emerson and Copestake, 2016).", "More recent solutions combine neural networks with tensor-based methods.", "Van de Cruys (2014) presents a feed-forward neural net trained to score compositions of both two and three groups with a max-margin loss.", "Grefenstette and Sadrzadeh (2011a,b); Kartsaklis and Sadrzadeh (2014); Milajevs et al. (2014); Edelstein and Reichart (2016) employ tensor compositions on standard single-space word vectors.", "Hashimoto and Tsuruoka (2016) discern compositional and non-compositional phrase embeddings starting from HPSG-parsed data.", "Objectives.", "We propose to induce function-specific vector spaces which enable a better model of associations between concepts, and consequently improved event representations, by encoding the relevant information directly into the parameters of each word during training.", "Word vectors offer several advantages over tensors: a large reduction in parameters and fixed dimensionality across concepts.", "This facilitates their reuse and transfer across different tasks.", "For this reason, we find our multidirectional training to deliver good performance: the same function-specific vector space achieves state-of-the-art scores across multiple related tasks, previously held by task-specific models.", "Our goal is to model the mutual associations (co-occurrences) between N groups of words, where each group represents a particular role, such as subject or object in an SVO structure.", "We induce an embedding matrix in $\mathbb{R}^{|V_i| \times d}$ for every group $i = 1, \ldots, N$, where $|V_i|$ corresponds to the vocabulary size of the i-th group; the group vocabularies can partially overlap.", "For consistency, the vector dimensionality d is kept equal across all variables.", "Multiple Groups.", "Without loss of generality, we present a model which creates a function-specific vector space for N = 3 groups, referring to those groups as A, B, and C.", "Note that the model is not limited to this setup, as we show later in Section 6. 
A, B and C might be interrelated phenomena, and we aim for a model which can reliably score the plausibility of combining three vectors ($\vec{A}$, $\vec{B}$, $\vec{C}$) taken from this space.", "In addition to the full joint prediction, we aim for any two-vector combination ($\vec{A}\vec{B}$, $\vec{B}\vec{C}$, $\vec{C}\vec{A}$) to have a plausible score of its own.", "Observing relations between words inside single-group subspaces (A, B, or C) is another desirable feature.", "Directionality.", "To design a solution with the necessary properties, we first need to consider the influence of prediction directionality in representation learning.", "A representation model such as SGNS (Mikolov et al., 2013a,b) learns two vectors for each word in one large vocabulary: one vector on the input side (word vector), another on the output side (context vector), with only the input word vectors being commonly used (Levy and Goldberg, 2014b).", "Here, we require several distinct vocabularies (i.e., three, one each for groups A, B, and C).", "Instead of context vectors, we train the model to predict words from another group, hence directionality is an important consideration.", "We find that prediction directionality has a strong impact on the quality of the induced representations, and illustrate this effect on an example that is skewed extremely to one side: an n:1 assignment case.", "Let us assume data of two groups, where each word of group A1 is assigned to exactly one of three clusters in group B3.", "We expect a function-specific word vector space customised for this purpose to show three clearly separated clusters.", "Figure 2 visualises the obtained representations.", "Figure 2a plots the vector spaces when we use words on the input side of the model and predict the cluster, A1 -> B3; this can be seen as an n:1 assignment.", "(For this experiment, we train on 10K randomly selected German nouns (A1) and their corresponding noun gender (B3) from a German-English dictionary obtained from dict.cc, and train a 25-dim model for 24 epochs; points in the figures show 1K words which were randomly selected from the 10K training vocabulary, and the embedding spaces have been mapped to 2D with tSNE (van der Maaten and Hinton, 2012).)", "In the opposite direction (B3 -> A1, a 1:n assignment) we do not observe the same trends (Figure 2b).", "Representations for other and more complex phenomena suffer from the same issue.", "For example, the verb eat can take many arguments corresponding to various food items such as pizza, beans, or kimchi.", "A more specific verb such as embark might take only a few arguments such as journey, whereas journey might be fairly general and can itself co-occur with many other verbs.", "We thus effectively deal with an n:m assignment case, which might be inclined towards 1:n or n:1 entirely depending on the words in question.", "Therefore, it is unclear whether one should rather construct a model predicting verb -> object or object -> verb.", "We resolve this fundamental design question by training representations in a multidirectional way with a joint loss function.", "Figure 2c shows how this method learns accurately clustered representations without having to make directionality assumptions.", "The multidirectional neural representation learning model takes a list of N groups of words 
($G_1$, $G_2$, $\ldots$, $G_N$), factorises it into all possible group-to-group sub-models, and trains them jointly by combining objectives based on skip-gram negative sampling (Mikolov et al., 2013a,b).", "We learn a joint function-specific word vector space by using sub-networks that each consume one group $G_i$ on the input side and predict words from a second group $G_j$ on the output side, for $i, j = 1, 2, \ldots, N$; $i \ne j$.", "All sub-network losses are tied into a single joint loss, and all groups $G_1, \ldots, G_N$ are shared between the sub-networks.", "Sub-Network Architecture.", "We first factorise the groups into sub-networks representing all possible directions of prediction.", "Two groups would lead to two sub-networks A -> B and B -> A; three groups lead to six sub-networks.", "Similar to Mikolov et al. (2013a,b), we calculate the dot-product between two word vectors to quantify their association.", "For instance, the sub-network A -> B computes its prediction as $P_{A \to B} = \sigma(\vec{a}\, B_e^T + \vec{b}_{ab})$, (1) where $\vec{a}$ is a word vector from the input group A, $B_e$ is the word embedding matrix of the target group B, $\vec{b}_{ab}$ is a bias vector, and $\sigma$ is the sigmoid function.", "The loss of each sub-network is computed using cross-entropy between this prediction and the correct labels: $\mathcal{L}_{A \to B} = \mathrm{cross\_entropy}(P_{A \to B}, L_{A \to B})$, where $L_{A \to B}$ are one-hot vectors corresponding to the correct predictions.", "We leave experiments with more sophisticated sub-networks for future work.", "Synchronous Joint Training.", "We integrate all sub-networks into one joint model via the two following mechanisms: (1) Shared Parameters.", "The three embedding matrices referring to groups A, B and C are shared across all sub-networks.", "That is, we train one matrix per group, regardless of whether it is being employed at the input or the output side of any sub-network.", "This leads to a substantial reduction in the model size.", "For example, with a vocabulary of 50,000 words and 25-dimensional vectors we work with only 1.35M parameters.", "Comparable models for the same tasks are trained with much larger sets of parameters: 26M, or even up to 179M when not factorised (Tilk et al., 2016).", "Our modeling approach can thus achieve a more than 95% reduction in the number of parameters.", "(2) Joint Loss.", "We also train all sub-networks with a single joint loss and a single backward pass.", "We refer to this manner of joining the losses as synchronous: it synchronises the backward pass of all sub-networks.", "This could also be seen as a form of multi-task learning, where each sub-network optimises the shared parameters for a different task (Ruder, 2017).", "In practice, we perform a forward pass in each direction separately, then join all sub-network cross-entropy losses and backpropagate this joint loss through all sub-networks in order to update the parameters.", "The different losses are combined using addition: $\mathcal{L} = \sum_\gamma \mathcal{L}_\gamma$, (3) where $\gamma$ iterates over all the possible sub-networks, $\mathcal{L}_\gamma$ is the corresponding loss from one sub-network, and $\mathcal{L}$ is the overall joint loss.", "When focusing on the SVO structures, the model will learn one joint space for the three groups of embeddings (one each for S, V and O).", "The six sub-networks all share parameters and optimization is performed using the joint loss: $\mathcal{L} = \mathcal{L}_{S \to V} + \mathcal{L}_{V \to S} + \mathcal{L}_{V \to O} + \mathcal{L}_{O \to V} + \mathcal{L}_{S \to O} + \mathcal{L}_{O \to S}$.", "(4) The vectors from the induced function-specific space can then be composed by standard composition functions (Milajevs et al., 2014) to yield event representations (Weber et al., 2018), that is, representations for the full SVO structure.", 
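A compact PyTorch sketch of this architecture for the SVO case; the class name and vocabulary sizes are illustrative, and full softmax cross-entropy stands in for the paper's sigmoid cross-entropy with negative sampling, so treat this as a simplified reconstruction:

```python
import torch
import torch.nn as nn

class MultidirectionalSVO(nn.Module):
    """Shared S/V/O embedding matrices with one sub-network per prediction
    direction; sub-network i->j scores group-j words for an input group-i
    word via logits = a_i . E_j^T + b_ij."""
    def __init__(self, vocab_sizes, d=25):
        super().__init__()
        self.emb = nn.ModuleDict(
            {g: nn.Embedding(v, d) for g, v in vocab_sizes.items()})
        self.bias = nn.ParameterDict(
            {f"{i}2{j}": nn.Parameter(torch.zeros(vocab_sizes[j]))
             for i in vocab_sizes for j in vocab_sizes if i != j})

    def sub_loss(self, i, j, ids_i, ids_j):
        logits = self.emb[i](ids_i) @ self.emb[j].weight.T + self.bias[f"{i}2{j}"]
        # Softmax cross-entropy as a stand-in for negative sampling.
        return nn.functional.cross_entropy(logits, ids_j)

    def forward(self, s_ids, v_ids, o_ids):
        ids = {"S": s_ids, "V": v_ids, "O": o_ids}
        # Synchronous joint loss over all six directions (Eq. 4).
        return sum(self.sub_loss(i, j, ids[i], ids[j])
                   for i in ids for j in ids if i != j)

# Toy usage with illustrative vocabulary sizes.
model = MultidirectionalSVO({"S": 5000, "V": 2000, "O": 6000}, d=25)
loss = model(torch.tensor([3]), torch.tensor([7]), torch.tensor([11]))
loss.backward()  # a single backward pass through the joint loss
```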
"Preliminary Task: Pseudo-Disambiguation.", "In the first evaluation, we adopt a standard pseudo-disambiguation task from the selectional preference literature (Rooth et al., 1999; Bergsma et al., 2008; Erk et al., 2010; Chambers and Jurafsky, 2010; Van de Cruys, 2014).", "For the three-group (S-V-O) case, the task is to score a true triplet (i.e., the (S-V-O) structure attested in the corpus) above all corrupted triplets (S-V'-O), (S'-V-O), (S-V-O'), where S', V' and O' denote subjects, verbs and objects randomly drawn from their respective vocabularies.", "Similarly, for the two-group setting, the task is to express a higher preference towards the attested pairs (V-O) or (S-V) over corrupted pairs (V-O') or (S'-V).", "We report accuracy scores, i.e., we count all items where score(true) > score(corrupted).", "This simple pseudo-disambiguation task serves as a preliminary sanity check: it can be easily applied to a variety of training conditions with different variables.", "However, as pointed out by Chambers and Jurafsky (2010), the performance on this task is strongly influenced by a number of factors such as vocabulary size and the procedure for constructing corrupted examples.", "Therefore, we additionally evaluate our models on a number of other established datasets (Sayeed et al., 2016).", "Event Similarity (3 Variables: SVO).", "A standard task to measure the plausibility of SVO structures (i.e., events) is event similarity (Grefenstette and Sadrzadeh, 2011a; Weber et al., 2018): the goal is to score similarity between SVO triplet pairs and correlate the similarity scores to human-elicited similarity judgements.", "Robust and flexible event representations are important to many core areas in language understanding such as script learning, narrative generation, and discourse understanding (Chambers and Jurafsky, 2009; Pichotta and Mooney, 2016; Modi, 2016; Weber et al., 2018).", "We evaluate event similarity on two benchmarking data sets: GS199 (Grefenstette and Sadrzadeh, 2011a) and KS108 (Kartsaklis and Sadrzadeh, 2014).", "GS199 contains 199 pairs of SVO triplets/events.", "In the GS199 data set only the V is varied, while S and O are fixed in the pair: this evaluation prevents the model from relying only on simple lexical overlap for similarity computation.", "(For instance, the phrases 'people run company' and 'people operate company' have a high similarity score of 6.53, whereas 'river meet sea' and 'river satisfy sea' have been given a low score of 1.84.)", "KS108 contains 108 event pairs for the same task, but is specifically constructed without any lexical overlap between the events in each pair.", "For this task, function-specific representations are composed into a single event representation/vector.", "Following prior work, we compare the cosine similarity of event vectors to averaged human scores and report Spearman's correlation.", "We compose the function-specific word vectors into event vectors using simple addition and multiplication, as well as more sophisticated compositions from prior work (Milajevs et al., 2014, inter alia).", "The summary is provided in Table 4.", 
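A sketch of this evaluation loop under the additive composition (the other composition functions from Table 4 can be swapped into `event_vector`); the vector inputs are assumed to come from the trained S, V, and O spaces:

```python
import numpy as np
from scipy.stats import spearmanr

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def event_vector(s_vec, v_vec, o_vec):
    # Simple additive composition of the S, V, O vectors; multiplicative
    # or tensor-based compositions could be used here instead.
    return s_vec + v_vec + o_vec

def event_similarity_correlation(event_pairs, human_scores):
    """event_pairs: list of ((s1,v1,o1), (s2,v2,o2)) vector triplets, e.g.
    from GS199 or KS108; returns Spearman's rho against human judgements."""
    sims = [cos(event_vector(*e1), event_vector(*e2)) for e1, e2 in event_pairs]
    return spearmanr(sims, human_scores).correlation
```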
"Thematic-Fit Evaluation (2 Variables: SV and VO).", "Similarly to the 3-group setup, we also evaluate the plausibility of SV and VO pairs separately in the 2-group setup.", "The selectional preference evaluation (Sayeed et al., 2016), also referred to as thematic fit, quantifies the extent to which a noun fulfils the selectional preference of a verb given a role (i.e., agent: S, or patient: O) (McRae et al., 1997).", "We evaluate our 2-group function-specific spaces on two standard benchmarks:", "1) MST1444 (McRae et al., 1998) contains 1,444 word pairs where humans provided thematic fit ratings on a scale from 1 to 7 for each noun, scoring the plausibility of the noun taking the agent role and also taking the patient role.", "(Using an example from Sayeed et al. (2016): the human participants were asked how common it is for a { snake, monster, baby, cat } to frighten someone/something (agent role), as opposed to how common it is for a { snake, monster, baby, cat } to be frightened by someone/something (patient role).)", "2) PADO414 (Pado, 2007) is similar to MST1444, containing 414 pairs with human thematic fit ratings, where role-filling nouns were selected to reflect a wide distribution of scores for each verb.", "We compute plausibility by simply taking the cosine similarity between the verb vector (from the V space) and the noun vector from the appropriate function-specific space (S space for agents; O space for patients).", "We again report Spearman's correlation scores.", 
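The thematic-fit score therefore reduces to a cosine between vectors from two function-specific subspaces; a minimal sketch, with inputs assumed to be built from a benchmark such as MST1444:

```python
import numpy as np
from scipy.stats import spearmanr

def thematic_fit(verb_vec, noun_vec):
    """Plausibility of a noun filling a verb's role: cosine between the verb
    vector (V space) and the noun vector from the role-specific space
    (S space for agents, O space for patients)."""
    return float(verb_vec @ noun_vec /
                 (np.linalg.norm(verb_vec) * np.linalg.norm(noun_vec)))

def thematic_fit_correlation(pairs, human_ratings):
    """pairs: list of (verb_vec, role_noun_vec) vector pairs."""
    return spearmanr([thematic_fit(v, n) for v, n in pairs],
                     human_ratings).correlation
```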
function-specific vector space (see Table 4) to human scores in the event similarity task.", "We also provide baseline scores taken from prior work, but the reader should be aware that the scores may not be directly comparable due to the dependence of this evaluation on factors such as vocabulary size and sampling of corrupted examples (Chambers and Jurafsky, 2010; Sayeed et al., 2016).", "A summary of the main results is provided in Table 5. We also report best baseline scores from prior work.", "The main finding is that our model based on function-specific word vectors outperforms previous state-of-the-art scores on both datasets.", "It is crucial to note that different modeling approaches and configurations from prior work held previous peak scores on the two evaluation sets.", "Interestingly, by relying only on the representations from the V subspace (i.e., by completely discarding the knowledge stored in S and O vectors), we can already obtain reasonable correlation scores.", "This is an indicator that the verb vectors indeed store some selectional preference information as designed, i.e., the information is successfully encoded into the verb vectors themselves.", "Results on two thematic-fit evaluation data sets are summarised in Table 6.", "We also report results with representative baseline models for the task:", "1) a TypeDM-based model (Baroni and Lenci, 2010), further improved by Greenberg et al. (2015a,b) (G15), and", "2) the current state-of-the-art tensor-based neural model by Tilk et al. (2016) (TK16).", "Note that the two tasks are inherently different.", "KS108 requires similarity between plausible triplets.", "Using the network score directly (which is a scalar, see Table 4) is not suitable for KS108, as all KS108 triplets are plausible and scored highly.", "This is reflected in the results in Table 5.", "We find that vectors taken from the model trained in the joint 3-group SVO setup perform on a par with state-of-the-art models also in the 2-group evaluation on SV and VO subsets.", "Vectors trained explicitly in the 2-group setup using three times more data lead to substantial improvements on PADO414.", "As a general finding, our function-specific approach leads to peak performance on both data sets.", "The results are similar with 25-dim SVO vectors.", "Our model is also more lightweight than the baselines: we do not require a full (tensor-based) neural model, but simply function-specific word vectors to reason over thematic fit.", "To further verify the importance of joint multidirectional training, we have also compared our function-specific vectors against standard single-space word vectors (Mikolov et al., 2013b).", "The results indicate the superiority of function-specific spaces: respective correlation scores on MST1444 and PADO414 are 0.28 and 0.41 (vs 0.34 and 0.58 with our model).", "It is interesting to note that we obtain state-of-the-art scores calculating cosine similarity of vectors taken from two groups found in the joint space.", "This finding verifies that the model does indeed learn a joint space where co-occurring words from different groups lie close to each other.", "Qualitative Analysis.", "We retrieve nearest neighbours from the function-specific (S, V, O) space, shown in Figure 1.", "We find that the nearest neighbours indeed reflect the relations required to model the SVO structure.", "For instance, the closest subjects/agents to the verb eat are cat and dog.", "The closest objects to need are three plausible nouns: help, support, and assistance.", "As the model has information about group membership, we can also filter and compare nearest neighbours in single-group subspaces.", "For example, we find that the 
subjects most similar to the subject memory are dream and feeling, and that the objects most similar to beer are ale and pint.", "Model Variants.", "We also conduct an ablation study that compares different model variants.", "The variants are constructed by varying", "1) the training regime: asynchronous (async) vs synchronous (sync), and", "2) the type of parameter sharing: training on separate parameters for each sub-network (sep) or training on shared variables (shared).", "Table 6: Results on the 2-variable thematic-fit evaluation (Spearman's correlation; columns: baselines G15 and TK16, then our SVO model (d=100) and our SV-VO model (d=25)). MST1444: SV 0.36/-/0.37/0.31; VO 0.34/-/0.35/0.35; full 0.33/0.38/0.36/0.34. PADO414: SV 0.54/-/0.38/0.55; VO 0.53/-/0.54/0.61; full 0.53/0.52/0.45/0.58.", "In the asynchronous setup we update the shared parameters per sub-network directly based on their own loss, instead of relying on the joint synchronous loss as in Section 3.", "Table 7 shows the results with the model variants, demonstrating that both aspects (i.e., shared parameters and synchronous training) are important to reach improved overall performance.", "We reach the peak scores on all evaluation sets using the sync + shared variant.", "We suspect that asynchronous training deteriorates performance because each sub-network overwrites the updates of other sub-networks, as their training is not tied through a joint loss function.", "On the other hand, the synchronous training regime guides the model towards making updates that can benefit all sub-networks.", "We presented a novel multidirectional neural framework for learning function-specific word representations, which can be easily composed into multiword representations to reason over event similarity and thematic fit.", "We induced a joint vector space in which several groups of words (e.g., S, V, and O words forming the SVO structures) are represented while taking into account the mutual associations between the groups.", "With separate parameters, we merge vectors from duplicate vector spaces by non-weighted averaging.", "We found that the resulting function-specific vectors yield state-of-the-art results on established benchmarks for the tasks of estimating event similarity and evaluating thematic fit, previously held by task-specific methods.", "In future work we will investigate more sophisticated neural (sub-)networks within the proposed framework.", "We will also apply the idea of function-specific training to other interrelated linguistic phenomena and other languages, probe the usefulness of function-specific vectors in other language tasks, and explore how to integrate the methodology with sequential models.", "The pre-trained word vectors used in this work are available online at: https://github.com/cambridgeltl/fs-wrep .", "This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no 648909) awarded to Anna Korhonen." ]
[ "result", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "other", "abstain", "abstain", "abstain", "objective", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "abstain", "abstain", "result", "objective", "objective", "other", "other" ]
[ "While cross-lingual techniques are finding increasing success in a wide range of Natural Language Processing tasks, their application to Semantic Role Labeling (SRL) has been strongly limited by the fact that each language adopts its own linguistic formalism, from PropBank for English to AnCora for Spanish and PDT-Vallex for Czech, inter alia.", "In this work, we address this issue and present a unified model to perform cross-lingual SRL over heterogeneous linguistic resources.", "Our model implicitly learns a high-quality mapping for different formalisms across diverse languages without resorting to word alignment and/or translation techniques.", "We find that, not only is our cross-lingual system competitive with the current state of the art but that it is also robust to low-data scenarios.", "Most interestingly, our unified model is able to annotate a sentence in a single forward pass with all the inventories it was trained with, providing a tool for the analysis and comparison of linguistic theories across different languages.", "We release our code and model at https://github.com/ SapienzaNLP/unify-srl .", "Semantic Role Labeling (SRL) a long-standing open problem in Natural Language Processing (NLP) and a key building block of language understanding (Navigli, 2018) is often defined as the task of automatically addressing the question Who did what to whom, when, where, and how? (Gildea and Jurafsky, 2000; Mrquez et al., 2008).", "While the need to manually engineer and fine-tune complex feature templates severely limited early work (Zhao et al., 2009), the great success of neural networks in NLP has resulted in impressive progress in SRL, thanks especially to the ability of recurrent networks to better capture relations over sequences (He et al., 2017; Marcheggiani et al., 2017).", "Owing to the recent wide availability of robust multilingual representations, such as multilingual word embeddings (Grave et al., 2018) and multilingual language models (Devlin et al., 2019; Conneau et al., 2020), researchers have been able to shift their focus to the development of models that work on multiple languages (Cai and Lapata, 2019b; He et al., 2019; Lyu et al., 2019).", "A robust multilingual representation is nevertheless just one piece of the puzzle: a key challenge in multilingual SRL is that the task is tightly bound to linguistic formalisms (Mrquez et al., 2008) which may present significant structural differences from language to language (Hajic et al., 2009).", "In the recent literature, it is standard practice to sidestep this issue by training and evaluating a model on each language separately (Cai and Lapata, 2019b; Chen et al., 2019; Kasai et al., 2019; He et al., 2019; Lyu et al., 2019).", "Although this strategy allows a model to adapt itself to the characteristics of a given formalism, it is burdened by the non-negligible need for training and maintaining one model instance for each language, resulting in a set of monolingual systems.", "Instead of dealing with heterogeneous linguistic theories, another line of research consists in actively studying the effect of using a single formalism across multiple languages through annotation projection or other transfer techniques (Akbik et al., 2015, 2016; Daza and Frank, 2019; Cai and Lapata, 2020; Daza and Frank, 2020).", "However, such approaches often rely on word aligners and/or automatic translation tools which may introduce a considerable amount of noise, especially in low-resource languages.", "More importantly, they rely on the 
strong assumption that the linguistic formalism of choice, which may have been developed with a specific language in mind, is also suitable for other languages.", "In this work, we take the best of both worlds and propose a novel approach to cross-lingual SRL.", "Our contributions can be summarized as follows: We introduce a unified model to perform cross-lingual SRL with heterogeneous linguistic resources; We find that our model is competitive against state-of-the-art systems on all 6 languages of the CoNLL-2009 benchmark; We show that our model is robust to low-resource scenarios, thanks to its ability to generalize across languages; We probe our model and demonstrate that it implicitly learns to align heterogeneous linguistic resources; We automatically build and release a cross-lingual mapping that aligns linguistic formalisms from diverse languages.", "We hope that our unified model will further advance cross-lingual SRL and represent a tool for the analysis and comparison of linguistic theories across multiple languages.", "End-to-end SRL.", "The SRL pipeline is usually divided into four steps: predicate identification, predicate sense disambiguation, argument identification, and argument classification.", "While early research focused its efforts on addressing each step individually (Xue and Palmer, 2004; Björkelund et al., 2009; Zhao et al., 2009), recent work has successfully demonstrated that tackling some of these subtasks jointly with multitask learning (Caruana, 1997) is beneficial.", "In particular, He et al. (2018) and, subsequently, Cai et al. (2018), Li et al. (2019) and Conia et al. (2020), indicate that predicate sense signals aid the identification of predicate-argument relations.", "Therefore, we follow this line and propose an end-to-end system for cross-lingual SRL.", "Multilingual SRL.", "Current work in multilingual SRL revolves mainly around the development of novel neural architectures, which fall into two broad categories: syntax-aware and syntax-agnostic ones.", "On one hand, the quality and diversity of the information encoded by syntax is an enticing prospect that has resulted in a wide range of contributions: Marcheggiani and Titov (2017) made use of Graph Convolutional Networks (GCNs) to better capture relations between neighboring nodes in syntactic dependency trees; Strubell et al. (2018) demonstrated the effectiveness of linguistically-informed self-attention layers in SRL; Cai and Lapata (2019b) observed that syntactic dependencies often mirror semantic relations and proposed a model that jointly learns to perform syntactic dependency parsing and SRL; He et al. (2019) devised syntax-based pruning rules that work for multiple languages.", "On the other hand, the complexity of syntax and the noisy performance of automatic syntactic parsers have deterred other researchers who, instead, have found methods to improve SRL without syntax: Cai et al. (2018) took advantage of an attentive biaffine layer (Dozat and Manning, 2017) to better model predicate-argument relations; Chen et al. (2019) and Lyu et al. 
(2019) obtained remarkable results in multiple languages by capturing predicate-argument interactions via capsule networks and iteratively refining the sequence of output labels, respectively; Cai and Lapata (2019a) proposed a semi-supervised approach that scales across different languages.", "While we follow the latter trend and develop a syntax-agnostic model, we underline that both the aforementioned syntax-aware and syntax-agnostic approaches suffer from a significant drawback: they require training one model instance for each language of interest.", "Their two main limitations are, therefore, that", "i) the number of trainable parameters increases linearly with the number of languages, and", "ii) the information available in one language cannot be exploited to make SRL more robust in other languages.", "In contrast, one of the main objectives of our work is to develop a unified cross-lingual model which can mitigate the paucity of training data in some languages by exploiting the information available in other, resource-richer languages.", "Cross-lingual SRL.", "A key challenge in performing cross-lingual SRL with a single unified model is the dissimilarity of predicate sense and semantic role inventories between languages.", "For example, the multilingual dataset distributed as part of the CoNLL-2009 shared task (Hajic et al., 2009) adopts the English Proposition Bank (Palmer et al., 2005) and NomBank (Meyers et al., 2004) to annotate English sentences, the Chinese Proposition Bank (Xue and Palmer, 2009) for Chinese, the AnCora (Taulé et al., 2008) predicate-argument structure inventory for Catalan and Spanish, the German Proposition Bank which, differently from the other PropBanks, is derived from FrameNet (Hajic et al., 2009), and PDT-Vallex (Hajic et al., 2003) for Czech.", "Many of these inventories are not aligned with each other as they follow and implement different linguistic theories which, in turn, may pose different challenges.", "Padó and Lapata (2009), and Akbik et al. 
(2015, 2016) worked around these issues by making the English PropBank act as a universal predicate sense and semantic role inventory and projecting PropBank-style annotations from English onto non-English sentences by means of word alignment techniques applied to parallel corpora such as Europarl (Koehn, 2005).", "These efforts resulted in the creation of the Universal PropBank, a multilingual collection of semi-automatically annotated corpora for SRL, which is actively in use today to train and evaluate novel cross-lingual methods such as word alignment techniques (Aminian et al., 2019).", "In the absence of parallel corpora, annotation projection techniques can still be applied by automatically translating an annotated corpus and then projecting the original labels onto the newly created silver corpus (Daza and Frank, 2020; Fei et al., 2020), whereas Daza and Frank (2019) have recently found success in training an encoder-decoder architecture to jointly tackle SRL and translation.", "While the foregoing studies have greatly advanced the state of cross-lingual SRL, they suffer from an intrinsic downside: using translation and word alignment techniques may result in a considerable amount of noise, which automatically puts an upper bound on the quality of the projected labels.", "Moreover, they are based on the strong assumption that the English PropBank provides a suitable formalism for non-English languages, and this may not always be the case.", "Among the numerous studies that adopt the English PropBank as a universal predicate-argument structure inventory for cross-lingual SRL, the work of Mulcaire et al. (2018) stands out for proposing a bilingual model that is able to perform SRL according to two different inventories at the same time, although with significantly lower results compared to the state of the art at the time.", "With our work, we go beyond current approaches to cross-lingual SRL and embrace the diversity of the various representations made available in different languages.", "In particular, our model has three key advantages:", "i) it does not rely on word alignment or machine translation tools;", "ii) it learns to perform SRL with multiple linguistic inventories;", "iii) it learns to link resources that would otherwise be disconnected from each other.", "In the wake of recent work in SRL, our model falls into the broad category of end-to-end systems as it learns to jointly tackle predicate identification, predicate sense disambiguation, argument identification and argument classification.", "The model architecture can be roughly divided into the following components: A universal sentence encoder whose parameters are shared across languages and which produces word encodings that capture predicate-related information (Section 3.2); A universal predicate-argument encoder whose parameters are also shared across languages and which models predicate-argument relations (Section 3.3); A set of language-specific decoders which indicate whether words are predicates, select the most appropriate sense for each predicate, and assign a semantic role to every predicate-argument couple, according to several different SRL inventories (Section 3.4).", "Unlike previous work, our model does not require any preexisting cross-resource mappings, word alignment techniques, translation tools, other annotation transfer techniques, or parallel data to perform high-quality cross-lingual SRL, as it relies solely on implicit cross-lingual knowledge transfer.", "Pretrained language models such as 
ELMo (Peters et al., 2018), BERT (Devlin et al., 2019) and XLM-RoBERTa (Conneau et al., 2020), inter alia, are becoming the de facto input representation method, thanks to their ability to encode vast amounts of knowledge.", "Following recent studies (Hewitt and Manning, 2019; Kuznetsov and Gurevych, 2020; Conia and Navigli, 2020), which show that different layers of a language model capture different syntactic and semantic characteristics, our model builds a contextual representation for an input word by concatenating the corresponding hidden states of the four top-most inner layers of a language model.", "More formally, given a word $w_i$ in a sentence $w = \langle w_0, w_1, \ldots, w_i, \ldots, w_{n-1} \rangle$ of $n$ words and its hidden state $h_i^k = l^k(w_i \mid w)$ from the $k$-th inner layer $l^k$ of a language model with $K$ layers, the model computes the word encoding $e_i$ as follows: $h_i = h_i^K \circ h_i^{K-1} \circ h_i^{K-2} \circ h_i^{K-3}$ and $e_i = \mathrm{Swish}(W_w h_i + b_w)$, where $x \circ y$ is the concatenation of the two vectors $x$ and $y$, and $\mathrm{Swish}(x) = x \cdot \mathrm{sigmoid}(x)$ is a non-linear activation which was found to produce smoother gradient landscapes than the more traditional ReLU (Ramachandran et al., 2018).", "Expanding on the seminal intuition of Fillmore (1968), who suggests the existence of deep semantic relations between a predicate and other sentential constituents, we argue that such semantic relations may be preserved across languages.", "With this reasoning in mind, we devise a universal sentence encoder whose parameters are shared across languages.", "Intuitively, the aim of our universal sentence encoder is to capture sentence-level information that is not formalism-specific and spans across languages, such as information about predicate positions and predicate senses.", "In our case, we implement this universal sentence encoder as a stack of BiLSTM layers (Hochreiter and Schmidhuber, 1997), similarly to Marcheggiani et al. (2017), Cai et al. (2018) and He et al. (2019), with the difference that we concatenate the output of each layer to its input in order to mitigate the problem of vanishing gradients.", "More formally, given a sequence of word encodings $e = \langle e_0, e_1, \ldots, e_{n-1} \rangle$, the model computes a sequence of timestep encodings $t$ as follows: $t_i^j = e_i$ if $j = 0$, and $t_i^j = t_i^{j-1} \circ \mathrm{BiLSTM}_i^j(t^{j-1})$ otherwise, with $t = \langle t_0^{K'}, t_1^{K'}, \ldots
, t_{n-1}^{K'} \rangle$, where $\mathrm{BiLSTM}_i^j(\cdot)$ is the $i$-th timestep of the $j$-th BiLSTM layer and $K'$ is the total number of layers in the stack.", "Starting from each timestep encoding $t_i$, the model produces a predicate representation $p_i$, which captures whether the corresponding word $w_i$ is a predicate, and a sense representation $s_i$ which encodes information about the sense of a predicate at position $i$: $p_i = \mathrm{Swish}(W_p t_i + b_p)$ and $s_i = \mathrm{Swish}(W_s t_i + b_s)$. We stress that the vector representations obtained for each timestep, each predicate and each sense lie in three spaces that are shared across the languages and formalisms used to perform SRL.", "In the same vein, and for the same reasoning that motivated the design of the above universal sentence encoder, our model includes a universal predicate-argument encoder whose parameters are also shared across languages.", "The objective of this second encoder is to capture the relations between each predicate-argument couple that appears in a sentence, independently of the input language.", "Similarly to the universal sentence encoder, we implement this universal predicate-argument encoder as a stack of BiLSTM layers.", "More formally, let $w_p$ be a predicate in the input sentence $w = \langle w_0, w_1, \ldots, w_p, \ldots, w_{n-1} \rangle$; then the model computes a sequence of predicate-specific argument encodings $a$ as follows: $a_i^j = t_p \circ t_i$ if $j = 0$, and $a_i^j = a_i^{j-1} \circ \mathrm{BiLSTM}_i^j(a^{j-1})$ otherwise, with $a = \langle a_0^{K''}, a_1^{K''}, \ldots, a_{n-1}^{K''} \rangle$, where $t_i$ is the $i$-th timestep encoding from the universal sentence encoder and $K''$ is the total number of layers in the stack.", "Starting from each predicate-specific argument encoding $a_i$, the model produces a semantic role representation $r_i$ for word $w_i$: $r_i = \mathrm{Swish}(W_r a_i + b_r)$. Similarly to the predicate and sense representations $p$ and $s$, since the predicate-argument encoder is one and the same for all languages, the semantic role representation $r$ obtained must draw upon cross-lingual information in order to abstract away from language-specific peculiarities.", "The aforementioned predicate encodings $p$, sense encodings $s$ and semantic role encodings $r$ are shared across languages, forcing the model to learn from semantics rather than from surface-level features such as word order, part-of-speech tags and syntactic rules, all of which may differ from language to language.", "Ultimately, however, we want our model to provide semantic role annotations according to an existing predicate-argument structure inventory, e.g., PropBank, AnCora, or PDT-Vallex.", "Our model, therefore, includes a set of linear decoders that indicate whether a word $w_i$ is a predicate, what the most appropriate sense for a predicate $w_p$ is, and what the semantic role of a word $w_r$ with respect to a specific predicate $w_p$ is, for each language $l$: $p(w_i \mid l) = W_{p|l} \, p_i + b_{p|l}$, $s(w_p \mid l) = W_{s|l} \, s_i + b_{s|l}$, and $r(w_r \mid w_p, l) = W_{r|l} \, r_i + b_{r|l}$. Although we could have opted for more complex decoding strategies, in our case linear decoders have two advantages: 1) they keep the language-specific part of the model as simple as possible, pushing the model into learning from its universal encoders; 2) they can be seen as linear probes, providing an insight into the quality of the cross-lingual knowledge that the model can capture.", "The model is trained to jointly minimize the sum of the categorical cross-entropy 
losses on predicate identification, predicate sense disambiguation and argument identification/classification over all the languages in a multitask learning fashion.", "More formally, given a language $l$ and the corresponding predicate identification loss $\mathcal{L}_{p|l}$, predicate sense disambiguation loss $\mathcal{L}_{s|l}$ and argument identification/classification loss $\mathcal{L}_{r|l}$, the cumulative loss $\mathcal{L}$ is: $\mathcal{L} = \sum_{l \in L} \left( \mathcal{L}_{p|l} + \mathcal{L}_{s|l} + \mathcal{L}_{r|l} \right)$, where $L$ is the set of languages and the corresponding formalisms in the training set.", "We evaluate our model in dependency-based multilingual SRL.", "The remainder of this Section describes the experimental setup (Section 4.1), provides a brief overview of the multilingual dataset we use for training, validation and testing (Section 4.2), and shows the results obtained on each language (Section 4.3).", "We implemented the model in PyTorch (https://pytorch.org) and PyTorch Lightning (https://www.pytorchlightning.ai), and used the pretrained language models for multilingual BERT (m-BERT) and XLM-RoBERTa (XLM-R) made available by the Transformers library (Wolf et al., 2020).", "We trained each model configuration for 30 epochs using Adam (Kingma and Ba, 2015) with a slanted triangle learning rate scheduling strategy which linearly increases the learning rate for 1 epoch and then linearly decreases the value for 15 epochs.", "We did not perform hyperparameter tuning and opted instead for standard values used in the literature; we provide more details about our model configuration and its hyperparameter values in Appendix A. In the remainder of this Section, we report the F1 scores of the best models selected according to the highest F1 score obtained on the validation set at the end of a training epoch.", "Dataset. To the best of our knowledge, the dataset provided as part of the CoNLL-2009 shared task (Hajic et al., 2009) is the largest and most diverse collection of human-annotated sentences for multilingual SRL.", "It comprises 6 languages, namely, Catalan, Chinese, Czech, English, German and Spanish, which belong to different linguistic families and feature significantly varying amounts of training samples, from 400K predicate instances in Czech to only 17K in German; we provide an overview of the statistics of each language in Appendix B. 
CoNLL-2009 is the ideal testbed for evaluating the ability of our unified model to generalize across heterogeneous resources since each language adopts its own linguistic formalism, from English PropBank to PDT-Vallex, from Chinese PropBank to AnCora.", "We also include VerbAtlas (Di Fabio et al., 2019), a recently released resource for SRL, with the aim of understanding whether our model can learn to align inventories that are based on distant linguistic theories; indeed, VerbAtlas is based on clustering WordNet synsets into frames that share similar semantic behavior, whereas PropBank-based resources enumerate and define the possible senses of a lexeme.", "As a final note, we did not evaluate our model on Universal PropBank since", "i) it was semi-automatically generated through annotation projection techniques, and", "ii) it uses the English PropBank for all languages, which goes against our interest in capturing cross-lingual knowledge over heterogeneous inventories.", "Hereafter, all the results of our experiments are computed by the official scorer of the CoNLL-2009 shared task, available at https://ufal.mff.cuni.cz/conll2009-st/scorer.html .", "The CoNLL-2009 shared task originally included a seventh language, Japanese, which is not available anymore on LDC due to licensing issues.", "We build a training set for VerbAtlas using the mapping from PropBank available at http://verbatlas.org .", "Cross-lingual SRL.", "Table 1 compares the results obtained by our unified cross-lingual model against the state of the art in multilingual SRL, including both syntax-agnostic and syntax-aware architectures, on the in-domain test sets of CoNLL-2009 when using gold pre-identified predicates, rather than the predicates identified by the model itself, as standard in the CoNLL-2009 shared task.", "While proposing a state-of-the-art architecture is not the focus of this work, we believed it was important to build our cross-lingual approach starting from a strong and consistent baseline.", "For this reason, Table 1 includes the results obtained when training a separate instance of our model for each language, using the same strategy adopted by current multilingual systems (Cai and Lapata, 2019a; He et al., 2019; Lyu et al., 2019) and showing results that are competitive with He et al. 
(2019), inter alia.", "Remarkably, thanks to its universal encoders shared across languages and formalisms, our unified cross-lingual model outperforms our state-of-the-art baseline in all 6 languages at a fraction of the cost in terms of number of trainable parameters (a single cross-lingual model against six monolingual models, each trained on a different language).", "Similar results can be seen in Table 2, where our cross-lingual approach improves over the state of the art on the out-of-domain evaluation of CoNLL-2009, especially in the German and English test sets, which were purposely built to include predicates that do not appear in the training set.", "These results confirm empirically our initial hunch that semantic role labeling relations are deeply rooted beyond languages, independently of their surface realization and their predicate-argument structure inventories.", "Finally, for completeness, Appendix E includes the results of our system on the individual subtasks, namely, predicate identification and predicate sense disambiguation. (Table 3: F1 scores on the in-domain evaluation of CoNLL-2009 with gold pre-identified predicates for low-resource (top) and one-shot learning (bottom) scenarios; columns CA/CZ/DE/EN/ES/ZH: XLM-R monolingual with 10% training data: 52.7/79.9/60.2/81.7/49.2/72.9; XLM-R cross-lingual with 10% training data: 78.2/84.0/69.9/84.3/76.1/78.6; XLM-R monolingual with 1-shot learning: 44.5/21.8/40.9/67.4/46.5/72.1; XLM-R cross-lingual with 1-shot learning: 63.2/28.9/50.1/70.2/62.6/73.6; XLM-R cross-lingual with 1-shot learning and 100% EN: 66.4/29.6/55.5/91.6*/64.3/76.7.)", "Low-resource cross-lingual SRL.", "We evaluate the robustness of our model in low-resource cross-lingual SRL by artificially reducing the training set of each language to 10% of its original size.", "Table 3 (top) reports the results obtained by our model when trained separately on the reduced training set of each language (monolingual), and the results obtained by the same model when trained on the union of the reduced training sets (cross-lingual).", "The improvements of our cross-lingual approach compared to the more traditional monolingual baseline are evident, especially in lower-resource scenarios, with absolute improvements in F1 score of 25.5%, 9.7% and 26.9% on the Catalan, German and Spanish test sets, respectively.", "This is thanks to the ability of the model to use the knowledge from a language to improve its performance on other languages.", "One-shot cross-lingual SRL.", "An interesting open question in SRL is whether a system can learn to model the semantic relations between a predicate sense s and its arguments, given a limited number of training samples in which s appears.", "In particular, in our case we are interested in understanding how the model fares in a synthetic scenario where each sense appears at most once in the training set, that is, we evaluate our model in a one-shot learning setting.", "As we can see from Table 3 (bottom), our cross-lingual approach outperforms its monolingual counterpart trained on each synthetic dataset separately by a wide margin, once again providing strong absolute improvements of 18.7% in Catalan, 9.2% in German and 16.1% in Spanish in terms of F1 score for languages where the number of training instances is smaller.", "The amount of annotated training data may vary from language to language, depending on how difficult it is to get manual annotations for each language of interest.", "We simulate this setting in SRL by training our model on 100% of the training data available 
for the English language, while keeping the one-shot learning setting for all the other languages.", "As Table 3 (bottom) shows, non-English languages exhibit further improvements as the number of English training samples increases, lending further credibility to the idea that SRL can be learnt across languages even when using heterogeneous resources.", "Not only do these results suggest that a cross-lingual/cross-resource approach might mitigate the need for a large training set in each language, but also that reasonable cross-lingual results may be obtained by maintaining a single large dataset for a high-resource language, together with several small datasets for low-resource languages.", "Cross-formalism SRL.", "In contrast to existing multilingual systems, a key benefit of our unified cross-lingual model is its ability to provide annotations for predicate senses and semantic roles in any linguistic formalism.", "As we can see from Figure 1 (left), given the English sentence the cat threw its ball out of the window, our language-specific decoders produce predicate sense and semantic role labels not only according to the English PropBank inventory, but also for all the other resources, as the model correctly identifies the agentive and patientive constituents independently of the formalism of interest.", "And this is not all: our model may potentially work on any of the 100 languages supported by the underlying language model (m-BERT or XLM-RoBERTa), e.g., in Italian, as shown in Figure 1 (right).", "This is vital for those languages for which a predicate-argument structure inventory has not yet been developed, an endeavor that may take years to come to fruition, and for which, therefore, manually annotated data are unavailable.", "Figure 1: Thanks to its universal encoders, our unified cross-lingual model is able to provide predicate sense and semantic role labels according to several linguistic formalisms.", "Thus, as long as a large amount of pretraining data is openly accessible, our system provides a robust cross-lingual tool to compare and analyze different linguistic theories and formalisms across a wide range of languages, on the one hand, and to overcome the issue of performing SRL on languages where no inventory is available, on the other.", "Aligning heterogeneous resources.", "As briefly mentioned previously, the universal encoders in the model architecture force our system to learn cross-lingual features that are important across different formalisms.", "A crucial consequence of this approach is that the model learns to implicitly align the resources it is trained on, without the aid of word aligners and translation tools, even when these resources may be designed around specific languages and, therefore, present significant differences.", "In order to bring to light what our model implicitly learns to align in its shared cross-lingual space (see Sections 3.2 and 3.3), we exploit its language-specific decoders to build a mapping from any source inventory, e.g., AnCora, to a target inventory, e.g., the English PropBank.", "In particular, we use our cross-lingual model to label a training set originally tagged with a source inventory to produce silver annotations according to a target inventory, similarly to what is shown in Figure 1.", "While producing the silver annotations, we keep track of the number of times each predicate sense in the source inventory is associated by the model with a predicate sense of the target inventory.", "As a result, we produce a weighted directed graph in which 
the nodes are predicate senses and an edge (a, b) with weight w indicates that our model maps the source predicate sense a to the target predicate sense b at least w times.", "A portion of this graph is displayed in Figure 2, where, for visualization purposes, we show the most frequent alignments for each language, i.e., the top-3 edges with largest weight from the nodes of each inventory to the nodes of the English PropBank (Figure 2, left) and to the nodes of VerbAtlas (Figure 2, right).", "For example, Figure 2 (left) shows that our model learns to map the Spanish AnCora sense empezar.c1 and the German PropBank sense starten.2 to the English PropBank sense start.01, but also that, depending on the context, the Chinese PropBank sense .01 can correspond to both start.01 and begin.01.", "Figure 2 (right) also shows that our model learns to map senses from different languages and formalisms to the coarse-grained senses of VerbAtlas, even though the latter formalism is quite distant from the others, as its frames are based on clustering WordNet synsets (sets of synonymous words that share similar semantic behavior), rather than enumerating and defining all the possible senses of a lexeme as in the English and Chinese PropBanks.", "To the best of our knowledge, our unified model is the first transfer-based tool to automatically align diverse linguistic resources across languages without relying on human supervision.", "On one hand, recent research in multilingual SRL has focused mainly on proposing novel model architectures that achieve state-of-the-art results, but require a model instance to be trained for each language of interest.", "On the other hand, the latest developments in cross-lingual SRL have revolved around using the English PropBank inventory as a universal resource for other languages through annotation transfer techniques.", "Following our hunch that semantic relations may be deeply rooted beyond the surface realizations that distinguish one language from another, we propose a new approach to cross-lingual SRL and present a model which learns from heterogeneous linguistic resources in order to obtain a deeper understanding of sentence-level semantics.", "To achieve this objective, we equip our model architecture with universal encoders which share their weights across languages. (We release the full alignment and the corresponding graph at https://github.com/SapienzaNLP/unify-srl .)", "Our unified cross-lingual model, evaluated on the gold multilingual benchmark of CoNLL-2009, outperforms previous state-of-the-art multilingual systems over 6 diverse languages, ranging from Catalan to Czech, from German to Chinese, and, at the same time, also considerably reduces the number of trainable parameters required to support different linguistic formalisms.", "And this is not all.", "We find that our approach is robust to low-resource scenarios where the model is able to exploit the complementary knowledge contained in the training set of different languages.", "Most importantly, our model is able to provide predicate sense and semantic role labels according to 7 predicate-argument structure inventories in a single forward pass, facilitating comparisons between different linguistic formalisms and investigations about interlingual phenomena.", "Our analysis shows that, thanks to the prior knowledge encoded in recent pretrained language models and our focus on learning from cross-lingual features, our model can be used on languages that were never seen at training time, opening the 
door to alignment-free cross-lingual SRL on languages where a predicate-argument structure inventory is not yet available.", "Finally, we show that our model implicitly learns to align heterogeneous resources, providing useful insights into inter-resource relations.", "We leave an in-depth qualitative and quantitative analysis of the learnt inter-resource mappings for future work.", "We hope that our work can set a stepping stone for future developments towards the unification of heterogeneous SRL.", "We release the code to reproduce our experiments and the checkpoints of our best models", "at https://github.com/SapienzaNLP/unify-srl .", "The authors gratefully acknowledge the support of the ERC Consolidator Grant MOUSSE No. 726487 under the European Union's Horizon 2020 research and innovation programme.", "This work was supported in part by the MIUR under grant Dipartimenti di eccellenza 2018-2022 of the Department of Computer Science of Sapienza University." ]
[ "abstain", "method", "abstain", "result", "method", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "abstain", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "objective", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "other", "method", "other", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "other", "abstain", "abstain", "method", "other", "other", "abstain", "method", "other", "abstain", "other", "abstain", "result", "method", "other", "method", "method", "method", "method", "method", "other", "abstain", "other", "abstain", "method", "abstain", "result", "abstain", "result", "result", "result", "result", "result", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "result", "result", "method", "result", "abstain", "result", "result", "abstain", "abstain", "abstain", "objective", "other", "result", "abstain", "result", "method", "result", "result", "abstain", "abstain", "other", "abstain", "other", "other" ]
[ "In online discussion fora, speakers often make arguments for or against something, say birth control, by highlighting certain aspects of the topic.", "In social science, this is referred to as issue framing .", "In this paper, we introduce a new issue frame annotated corpus of online discussions.", "We explore to what extent models trained to detect issue frames in newswire and social media can be transferred to the domain of discussion fora, using a combination of multi-task and adversarial training, assuming only unlabeled training data in the target domain.", "The framing of an issue refers to a choice of perspective, often motivated by an attempt to influ-ence its perception and interpretation (Entman, 1993; Chong and Druckman, 2007).", "The way issues are framed can change the evolution of policy as well as public opinion (Dardis et al., 2008; Iyengar, 1991).", "As an illustration, contrast the statement Illegal workers depress wages with This country is abusing and terrorizing undocumented immigrant workers .", "The first statement puts focus on the economic consequences of immigration, whereas the second one evokes a morality frame by pointing out the inhumane conditions under which immigrants may have to work.", "Being exposed to primarily one of those perspectives might affect the publics attitude towards immigration.", "Computational methods for frame classification have previously been studied in news articles (Card et al., 2015) and social media posts (John-son et al., 2017).", "In this work, we introduce a new benchmark dataset, based on a subset of the 15 generic frames in the Policy Frames Codebook by Boydstun et al. (2014).", "We focus on frame classification in online discussion fora , which have be-Platform: Online discussions Economic Frame, Topic: Same sex marriage But as we have seen, supporting same-sex marriage saves money.", "come crucial platforms for public dialogue on social and political issues.", "Table 1 shows example annotations, compared to previous annotations for news articles and social media.", "Dialogue data is substantially different from news articles and social media, and we therefore explore ways to transfer information from these domains, using multitask and adversarial learning, providing non-trivial baselines for future work in this area.", "Contributions We present a new issue-frame annotated dataset that is used to evaluate issue frame classification in online discussion fora.", "Issue frame classification was previously limited to news and social media.", "As manual annotation is expensive, we explore ways to overcome the lack of labeled training data in the target domain with Frames 1 13 5 6 7 # instances 78 96 234 166 186 Table 2: Class distribution in the online discussion test set.", "multi-task and adversarial learning, leading to improved results in the target domain.", "1 Related Work Previous work on automatic frame classification focused on news articles and social media.", "Card et al. (2016) predict frames in news articles at the document level, using clusters of latent dimensions and word-based features in a logistic regression model.", "Ji and Smith (2017) improve on previous work integrating discourse structure into a recursive neural network.", "Naderi and Hirst (2017) use the same resource, but make predictions at the sentence level, using topic models and recurrent neural networks.", "Johnson et al. 
(2017) predict frames in social media data at the micro-post level, using probabilistic soft logic based on lists of keywords, as well as temporal similarity and network structure.", "All the work mentioned above uses the generic frames of Boydstun et al. (2014)'s Policy Frames Codebook.", "Baumer et al. (2015) predict words perceived as frame-evoking in political news articles with hand-crafted features.", "Field et al. (2018) analyse how Russian news articles frame the U.S. using a keyword-based cross-lingual projection setup.", "Tsur et al. (2015) use topic models to analyze issue ownership and framing in public statements released by the US congress.", "Besides work on frame classification, there has recently been a lot of work on aspects closely related to framing, such as subjectivity detection (Lin et al., 2011), detection of biased language (Recasens et al., 2013) and stance detection (Mohammad et al., 2016; Augenstein et al., 2016; Ferreira and Vlachos, 2016).", "We create a new resource of issue-frame annotated online fora discussions, by annotating a subset of the Argument Extraction Corpus (Swanson et al., 2015) with a subset of the frames in the Policy Frames Codebook.", "The Argument Extraction 1 Code and annotations are available at https:// github.com/coastalcph/issue_framing .", "Corpus is a collection of argumentative dialogues across topics and platforms.", "2 The corpus contains posts on the following topics: gay marriage , gun control , death penalty and evolution .", "A subset of the corpus was annotated with argument quality scores by Swanson et al. (2015), which we exploit in our multi-task setup (see 3).", "We collect new issue frame annotations for each argument in the argument-quality annotated data.", "3 We refer to this new issue-frame annotated corpus as online discussion corpus henceforth.", "Each argument can have one or multiple frames.", "Following Naderi and Hirst (2017), we focus on the five most frequent issue frames: Economic , constitutionality and jurisprudence , policy prescription and evaluation , law and order/crime and jus-tice , and political .", "See Table 1 for examples and Table 2 for the class distribution in the resulting online discussions test set.", "Phrases which do not match the five categories are labeled as Other , but we do not consider this class in our experiments.", "The annotations were done by a single annotator.", "A second annotator labeled a subset of 200 instances that we use to compute agreement as macro-averaged F-score, assuming one of the annotations as gold standard.", "Results are 0 .", "73 and 0 .", "7 , respectively.", "The averaged Cohen's Kappa is 0 .", "71 .", "The dataset described in the previous section serves as evaluation set for the online discussions domain.", "As we do not have labeled training data for this domain, we exploit additional corpora and additional annotations, which are described in the next subsection.", "Statistics of the filtered datasets as well as preprocessing details are given in Appendix A. Media Frames Corpus The Media Frames Corpus (Card et al., 2015) contains US newspaper articles on three topics: Immigration , smoking and same-sex marriage .", "The articles are annotated with the 15 framing dimensions defined in the Policy Frames Codebook.", "4 The annotations are on 2 The corpus is a combination of dialogues from http: //www.createdebate.com/ , and Walker et al. 
(2012)'s Internet Argument Corpus, which contains dialogues from 4forums.com .", "span-level and can cross sentence boundaries.", "We convert span annotations to sentence-level annotations as follows: if a span annotated with label l lies within sentence boundaries and covers at least 50% of the tokens in the sentence, we label the sentence with l .", "We only keep sentence annotations if they are indicated by at least two annotators.", "Congressional Tweets Dataset The congressional tweets dataset (Johnson et al., 2017) contains tweets authored by 40 members of the US Congress, annotated with the frames of the Policy Frames Codebook.", "The tweets are related to one or two of the following six issues: abortion , the Affordable Care Act , gun rights vs. gun control , immigration , terrorism , and the LGBTQ community , where each tweet is annotated with one or multiple frames.", "Argument Quality Annotations The corpus of online discussions contains additional annotations that we exploit in the multi-task setup.", "Swanson et al. (2015) sampled a subset of 5,374 sentences, using various filtering methods to increase likelihood of high quality argument occurrence, and collected annotations for argument quality via crowdsourcing.", "Annotators were asked to rate argument quality using a continuous slider [0-1].", "Seven annotations per sentence were collected.", "We convert these annotations into binary labels (1 if 0.5, 0, otherwise) and generate an approximately balanced dataset for a binary classification task that is then used as an auxiliary task in the multi-task setup.", "Balancing is motivated by the observation that balanced datasets tend to be better auxiliary tasks (Bingel and Sgaard, 2017).", "The task we are faced with is (multi-label) sequence classification for online discussions.", "However, we have no labeled training data (and only a small labeled validation set) for the target task in the target domain.", "Hence, we train our model on a dataset which is labeled with the target labels, but from a different domain.", "The largest such dataset is the news articles corpus, which we consequently use as main task.", "Our baseline model is a two-layer LSTM (Hochreiter and Schmidhuber) trained on only the news articles data.", "We then apply two strategies to facilitate the transfer of information from source to target domain, multi-task learning and adversarial learning.", "We briefly describe both setups in the following.", "An overview over tasks and data used in the different models is shown in Table 3.", "Multi-Task Learning To exploit synergies between additional datasets/annotations, we explore a simple multi-task learning with hard parameter sharing strategy, pioneered by Caruana (1993), introduced in the context of NLP by Collobert et al. (2011), and to RNNs by Sgaard and Goldberg (2016), which has been shown to be useful for a variety of NLP tasks, e.g. 
sequence labelling (Rei, 2017; Ruder et al., 2019; Augenstein and Sgaard, 2017), pairwise sequence classification (Augen-stein et al., 2018) or machine translation (Dong et al., 2015).", "Here, parameters are shared between hidden layers.", "Intuitively, it works by training several networks in parallel, tying a subset of the hidden parameters so that updates in one network affect the parameters of the others.", "By sharing parameters, the networks regularize each other, and the network for one task can benefit from repre-Figure 1: Overview over the multi-task model (left) and the adversarial model (right).", "The baseline LSTM model corresponds to the same architecture with only one task.", "Our multi-task architecture is shown in Figure 1.", "We have N different datasets T 1 , , TN .", "Each dataset T i consists of tuples of sequences x T i XT i and labels y T i YT i .", "A model for task T i consists of an input layer, an LSTM layer (that is shared with all other tasks) and a feed forward layer with a softmax activation as output layer.", "The input layer embeds a sequence x T i using pretrained word embeddings.", "The LSTM layer recurrently processes the embedded sequence and outputs the final hidden state h .", "The output layer outputs a vector of probabilities p T i RYT i , based on which the loss L i is computed as the categorical cross-entropy between prediction p T i and true label y T i .", "In each iteration, we sample a data batch for one of the tasks and update the model parameters using stochastic gradient descent.", "If we sample a batch from the main task or an auxiliary task is decided by a weighted coin flip.", "Adversarial Learning Ganin and Lempitsky (2015) proposed adversarial learning for domain adaptation that can exploit unlabeled data from the target domain.", "The idea is to learn a classifier that is as good as possible at assigning the target labels (learned on the source domain), but as poor as possible in discriminating between instances of the source domain and the target domain.", "With this strategy, the classifier learns representations that contain information about the target class but abstract away from domain-specific features.", "During training, the model alternates between 1) pre-1 5 6 7 13 Figure 2: Improvement in F-score over the random baseline by class.", "dicting the target labels and 2) predicting a binary label discriminating between source and target instances.", "In this second step, the gradient that is backpropagated is flipped by a Gradient-Reversal layer.", "5 Consequently, the model parameters are updated such that the classifier becomes worse at solving the task.", "The architecture is shown in the right part of Figure 1.", "In our implementation, the model samples batches from the adversarial task or the main task based on a weighted coinflip.", "We compare the multi-task learning and the adversarial setup with two baseline models:", "(a) a Random Forest classifier using tf-idf weighted bag-of-words-representations, and", "(b) the LSTM baseline model.", "For the multi-task model, we use both the Twitter dataset and the argument quality dataset as auxiliary tasks.", "For all models, we report results on the test set using the optimal hyperparameters that we found averaged over 3 runs on the validation set.", "For the neural models, we use 100-dimensional GloVe embeddings (Pennington et al., 2014), pre-trained on Wikipedia and Giga-word.", "6 Details about hyper-parameter tuning and optimal settings can be found in Appendix B. 
"We compare the multi-task learning and the adversarial setup with two baseline models:", "(a) a Random Forest classifier using tf-idf weighted bag-of-words representations, and", "(b) the LSTM baseline model.", "For the multi-task model, we use both the Twitter dataset and the argument quality dataset as auxiliary tasks.", "For all models, we report results on the test set using the optimal hyperparameters that we found averaged over 3 runs on the validation set.", "For the neural models, we use 100-dimensional GloVe embeddings (Pennington et al., 2014), pre-trained on Wikipedia and Gigaword.", "Details about hyper-parameter tuning and optimal settings can be found in Appendix B.", "Results The results in Table 5 show that both the multi-task and the adversarial model improve over the baselines.", "The multi-task model achieves minor improvements over the LSTM baseline, with a bigger improvement in the micro-averaged score, indicating bigger improvements with frequent labels.", "The adversarial model performs best, with an error reduction of 5.6% in micro-averaged F over the LSTM baseline.", "Figure 2: Improvement in F-score over the random baseline by class.", "Figure 2 shows the system performances for each class.", "Each bar indicates the difference between the F-score of the respective system and the random baseline.", "The adversarial model achieves the biggest improvements over the baseline for classes 5 and 7, which are the two most frequent classes in the test set (cf. Table 6).", "For classes 1 and 13, the adversarial model is outperformed by the LSTM.", "Furthermore, we see that the hardest frame to predict is the Policy prescription and evaluation frame (6), where the models achieve the lowest improvement over the baseline and the lowest absolute F-score.", "This might be because utterances with this frame tend to address specific policies that vary according to topic and domain of the data, and are thus hard to generalize from source to target domain.", "Analysis Table 4 contains examples of model predictions on the dialogue dev set.", "In Example (1), the adversarial and the multi-task model correctly predict a Constitutionality frame, while the LSTM model incorrectly predicts a Crime and punishment frame.", "In Examples (2) and (3), only the adversarial model predicts the correct frames.", "In both cases, the LSTM model incorrectly predicts an Economic frame, possibly because it is misled by picking up on a different sense of the terms means and risks .", "In Example (4), all models make an incorrect prediction.", "We speculate this might be because the models pick up on the phrase restrictions on handguns and interpret it as referring to a policy, whereas to correctly label the sentence they would have to pick up on the violation of the Second Amendment , indicating a Constitutionality frame.", "This work introduced a new benchmark of political discussions from online fora, annotated with issue frames following the Policy Frames Codebook.", "Online fora are influential platforms that can have an impact on public opinion, but the language used in such fora is very different from newswire and other social media.", "We showed, however, how multi-task and adversarial learning can facilitate transfer learning from such domains, leveraging previously annotated resources to improve predictions on informal, multi-party discussions.", "Our best model obtained a micro-averaged F1-score of 0.548 on our new benchmark.", "We acknowledge the resources provided by CSC in Helsinki through NeIC-NLPL (www.nlpl.eu), and the support of the Carlsberg Foundation and the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research." ]
[ "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "other" ]
[ "Recognizing that even correct translations are not always semantically equivalent, we automatically detect meaning divergences in parallel sentence pairs with a deep neural model of bilingual semantic similarity which can be trained for any parallel corpus without any manual annotation.", "We show that our semantic model detects divergences more accurately than models based on surface features derived from word alignments, and that these divergences matter for neural machine translation.", "Parallel sentence pairs are sentences that are translations of each other, and are therefore often assumed to convey the same meaning in the source and target language.", "Occasional mismatches between source and target have been primarily viewed as alignment noise (Goutte et al., 2012) due to imperfect sentence alignment tools in parallel corpora drawn from translated texts (Tiedemann, 2011; Xu and Yvon, 2016), or the noisy process of extracting parallel segments from non-parallel corpora (Fung and Cheung, 2004; Munteanu and Marcu, 2005).", "In contrast, we view translation as a process that inherently introduces meaning mismatches, so that even correctly aligned sentence pairs are not necessarily semantically equivalent.", "This can happen for many reasons: translation lexical choice often involves selecting between near synonyms that introduce language-specific nuances (Hirst, 1995), typological divergences lead to structural mismatches (Dorr, 1994), differences in discourse organization can make it impossible to find one-to-one sentence alignments (Li et al., 2014).", "Cross-linguistic variations in other discourse phenomena such as coreference, discourse relation and modality (Lapshinova-Koltunski, 2015) compounded with translation effects that distinguish translationese from original text (Koppel and Ordan, 2011) might also lead to meaning mismatches between source and target.", "In this paper, we aim to provide empirical evidence that semantic divergences exist in parallel corpora and matter for downstream applications.", "This requires an automatic method to distinguish semantically equivalent sentence pairs from semantically divergent pairs, so that parallel corpora can be used more judiciously in downstream cross-lingual NLP applications.", "We propose a semantic model to automatically detect whether a sentence pair is semantically divergent (Sec-tion 3).", "While prior work relied on surface cues to detect mis-aligments, our approach focuses on comparing the meaning of words and overlapping text spans using bilingual word embeddings (Lu-ong et al., 2015) and a deep convolutional neural network (He and Lin, 2016).", "Crucially, training this model requires no manual annotation.", "Noisy supervision is obtained automatically borrowing techniques developed for parallel sentence extraction (Munteanu and Marcu, 2005).", "Our model can thus easily be trained to detect semantic divergences in any parallel corpus.", "We extensively evaluate our semantically-motivated models on intrinsic and extrinsic tasks: detection of divergent examples in two parallel English-French data sets (Section 5), and data selection for English-French and Vietnamese-English machine translation (MT) (Section", "6).The semantic models significantly outperform other methods on the intrinsic task, and help select data to train neural machine translation faster with no loss in translation quality.", "Taken together, these results provide empirical evidence that sentence-alignment does not necessarily imply semantic 
"Translation Divergences We use the term semantic divergences to refer to bilingual sentence pairs, including translations, that do not have the same meaning.", "These differ from typological divergences , which have been defined as structural differences between sentences that convey the same meaning (Dorr, 1994; Habash and Dorr, 2002), and reflect the fact that languages do not encode the same information in the same way.", "Non-Parallel Corpora Mismatches in bilingual sentence pairs have previously been studied to extract parallel segments from non-parallel corpora, and augment MT training data (Fung and Cheung, 2004; Munteanu and Marcu, 2005, 2006; Abdul-Rauf and Schwenk, 2009; Smith et al., 2010; Riesa and Marcu, 2012, inter alia).", "Methods for parallel sentence extraction rely primarily on surface features based on translation lexicons and word alignment patterns (Munteanu and Marcu, 2005, 2006).", "Similar features have proved to be useful for the related task of translation quality estimation (Specia et al., 2010, 2016), which aims to detect divergences introduced by MT errors, rather than human translation.", "Recently, sentence embeddings have also been used to detect parallelism (España-Bonet et al., 2017; Schwenk and Douze, 2017).", "Although embeddings capture semantic generalizations, these models are trained with neural MT objectives, which do not distinguish semantically equivalent segments from divergent parallel segments.", "Cross-Lingual Sentence Semantics Cross-lingual semantic textual similarity (Agirre et al., 2016) and cross-lingual textual entailment (Negri and Mehdad, 2010; Negri et al., 2012, 2013) seek to characterize semantic relations between sentences in two different languages beyond translation equivalence, and are therefore directly relevant to our goal.", "While the human judgments obtained for each task differ, they all take inputs of the same form (two segments in different languages) and output a prediction that can be interpreted as indicating whether they are equivalent in meaning or not.", "Models share core intuitions, relying either on MT to transfer the cross-lingual task into its monolingual equivalent (Jimenez et al., 2012; Zhao et al., 2013), or on features derived from MT components such as translation dictionaries and word alignments (Turchi and Negri, 2013; Lo et al., 2016).", "Training requires manually annotated examples, either bilingual, or monolingual when using MT for language transfer.", "Impact of mismatched sentence pairs on MT Prior MT work has focused on noise in sentence alignment rather than semantic divergence.", "Goutte et al. (2012) show that phrase-based systems are remarkably robust to noise in parallel segments.", "When introducing noise by permuting the target side of parallel pairs, as many as 30% of training examples had to be permuted to degrade BLEU significantly.", "While such artificial noise does not necessarily capture naturally occurring divergences, there is evidence that data cleaning to remove real noise can benefit MT in low-resource settings (Matthews et al., 2014).", "Neural MT quality appears to be more sensitive to the nature of training examples than phrase-based systems.", "Chen et al. (2016) suggest that neural MT systems are sensitive to sentence pair permutations in domain adaptation settings.",
"Belinkov and Bisk (2018) demonstrate the brittleness of character-level neural MT when exposed to synthetic noise (random permutations of words and characters) as well as natural human errors.", "Concurrently with our work, Hassan et al. (2018) claim that even small amounts of noise can have adverse effects on neural MT models, as they tend to assign high probabilities to rare events.", "They filter out noise and select relevant in-domain examples jointly, using similarities between sentence embeddings obtained from the encoder of a bidirectional neural MT system trained on clean in-domain data.", "In contrast, we detect semantic divergence with dedicated models that require only 5000 parallel examples (see Section 5).", "This work builds on our initial study of semantic divergences (Carpuat et al., 2017), where we provide a framework for evaluating the impact of meaning mismatches in parallel segments on MT via data selection: we show that filtering out the most divergent segments in a training corpus improves translation quality.", "However, we previously detected mismatches using a cross-lingual entailment classifier, which is based on surface features only, and requires manually annotated training examples (Negri et al., 2012, 2013).", "In this paper, we detect divergences using a semantically-motivated model that can be trained given any existing parallel corpus without manual intervention.", "We introduce our approach to detecting divergence in parallel sentences, with the goal of (1) detecting differences ranging from large mismatches to subtle nuances, (2) without manual annotation.", "Cross-Lingual Semantic Similarity Model We address the first requirement using a neural model that compares the meaning of sentences using a range of granularities.", "We repurpose the Very Deep Pairwise Interaction (VDPWI) model, which has previously been used to detect semantic textual similarity (STS) between English sentence pairs (He and Lin, 2016).", "It achieved competitive performance on data from the STS 2014 shared task (Agirre et al., 2014), and outperformed previous approaches on sentence classification tasks (He et al., 2015; Tai et al., 2015), with fewer parameters, faster training, and without requiring expensive external resources such as WordNet.", "The VDPWI model was designed for comparing the meaning of sentences in the same language, based not only on word-to-word similarity comparisons, but also on comparisons between overlapping spans of the two sentences, as learned by a deep convolutional neural network.", "We adapt the model to our cross-lingual task by initializing it with bilingual embeddings.", "To the best of our knowledge, this is the first time this model has been used for cross-lingual tasks in such a way.", "We give a brief overview of the resulting model here and refer the reader to the original paper for details.", "Given sentences e and f , VDPWI models the semantic similarity between them using a pipeline consisting of five components:", "1. Bilingual word embeddings : Each word in e and f is represented as a vector using pre-trained, bilingual embeddings.", "2. BiLSTM for contextualizing words : Contextualized representations for words in e and f are obtained by choosing the output vectors at each time step obtained by running a bidirectional LSTM (Schuster and Paliwal, 1997) on each sentence.", "3. Word similarity cube : The contextualized representations are used to calculate various similarity scores between each word in e and each word in f .", "Each score thus forms a matrix and all such matrices are stacked to form a similarity cube tensor.", "4. Similarity focus layer : The similarity cube is fed to a similarity focus layer that re-weights the similarities in the cube to focus on highly similar word pairs, by decreasing the weights of pairs which are less similar.", "This output is called the focus cube .", "5. Deep convolutional network : The focus cube is treated as an image and passed to a deep neural network, the likes of which have been used to detect patterns in images.", "The network consists of repeating convolution and pooling layers.", "Each repetition consists of a spatial convolutional layer, a Rectified Linear Unit (Nair and Hinton, 2010), and a max pooling layer, followed by fully connected layers, and a softmax to obtain the final output.", "The entire architecture is trained end-to-end to minimize the Kullback-Leibler divergence (Kullback, 1959) between the output similarity score and gold similarity score.",
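A simplified sketch of the word similarity cube from steps 3-4; the exact similarity measures used by VDPWI differ, so the three channels here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def similarity_cube(h_e, h_f):
    """h_e: (len_e, d) and h_f: (len_f, d) contextualized BiLSTM outputs."""
    # pairwise cosine similarity between every word in e and every word in f
    cos = F.cosine_similarity(h_e.unsqueeze(1), h_f.unsqueeze(0), dim=-1)
    # pairwise L2 distance and dot product as additional channels
    l2 = torch.cdist(h_e.unsqueeze(0), h_f.unsqueeze(0)).squeeze(0)
    dot = h_e @ h_f.t()
    # stack the matrices into a cube: (channels, len_e, len_f)
    return torch.stack([cos, l2, dot], dim=0)
```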
"Noisy Synthetic Supervision How can we obtain gold similarity scores as supervision for our task?", "We automatically construct examples of semantically divergent and equivalent sentences as follows.", "Since a large number of parallel sentence pairs are semantically equivalent, we use parallel sentences as positive examples.", "Synthetic negative examples are generated following the protocol introduced by Munteanu and Marcu (2005) to distinguish parallel from non-parallel segments.", "Specifically, candidate negative examples are generated starting from the positive examples $\{(e_i, f_i)\ \forall i\}$ and taking the Cartesian product of the two sides of the positive examples $\{(e_i, f_j)\ \forall i, j \text{ s.t. } i \neq j\}$.", "This candidate set is filtered to ensure that negative examples are not too easy to identify: we only retain pairs that are close to each other in length (a length ratio of at most 1:2), and have enough words (at least half) which have a translation in the other sentence according to a bilingual dictionary derived from automatic word alignments.", "This process yields positive and negative examples that are a noisy source of supervision for our task, as some of the positive examples might not be fully equivalent in meaning.", "However, experiments will show that, in aggregate, they provide a useful signal for the VDPWI model to learn to detect semantic distinctions (Section 5).",
i 6 = j } .", "This candidate set is filtered to ensure that negative examples are not too easy to identify: we only retain pairs that are close to each other in length (a length ratio of at most 1:2), and have enough words (at least half) which have a translation in the other sentence according to a bilingual dictionary derived from automatic word alignments.", "This process yields positive and negative examples that are a noisy source of supervision for our task, as some of the positive examples might not be fully equivalent in meaning.", "However, experiments will show that, in aggregate, they provide a useful signal for the VDPWI model to learn to detect semantic distinctions (Section 5).", "We crowdsource annotations of English-French sentence pairs to construct test beds for evaluating our models, and also to assess how frequent semantic divergences are in parallel corpora.", "Data Selection We draw examples for annotation randomly from two English-French corpora, using a resource-rich and well-studied language pair, and for which bilingual annotators can easily be found.", "The OpenSubtitles corpus contains 33M sentence pairs based on translations of movie subtitles.", "The sentence pairs are expected to not be completely parallel given the many constraints imposed on translations that should fit on a screen and be synchronized with a movie (Tiede-mann, 2007; Lison and Tiedemann, 2016), and the use of more informal registers which might require frequent non-literal translations of figu-rative language.", "The Common Crawl corpus contains sentence-aligned parallel documents automatically mined from the Internet.", "Parallel documents are discovered using e.g., URL containing language code patterns, and sentences are automatically aligned after structural cleaning of HTML.", "The resulting corpus of 3M sentence pairs is noisy, yet extremely useful to improve translation quality for multiple language pairs and domains (Smith et al., 2013).", "Annotation Protocol Divergence annotations are obtained via Crowdflower.", "1 Since this task requires good command of both French and English, we rely on a combination of strategies to obtain good quality annotations, including Crowd-flower's internal worker proficiency ratings, geo-restriction, reference annotations by a bilingual speaker in our lab, and instructions that alternate between the two languages (Agirre et al., 2016).", "Annotators are shown an English-French sentence pair, and asked whether they agree or disagree with the statement the French and English text convey the same information.", "We do not use the term divergent, and let the annotators' intuitions about what constitutes the same take precedence.", "We set up two distinct annotation tasks, one for each corpus, so that workers only see examples sampled from a given corpus in a given job.", "Each example is shown to five distinct annotators.", "Annotation Analysis Forcing an assignment of divergent or equivalent labels by majority vote yields 43.6% divergent examples in OpenSubtitles, and 38.4% in Common Crawl.", "Fleiss' Kappa indicates moderate agreement between annotators (0.41 for OpenSubtitles and 0.49 for Common Crawl).", "This suggests that the annotation protocol can be improved, perhaps by using graded judgments as in Semantic Textual Similarity tasks (Agirre et al., 2016), or for sentence alignment 1 http://crowdflower.com 1506 confidence evaluation (Xu and Yvon, 2016).", "Current annotations are nevertheless useful, and different degrees of agreement reveal 
"Examples labeled as divergent with high confidence (lowest block of the table) are either unrelated, or one language misses significant information that is present in the other.", "Examples labeled divergent with lower confidence contain more subtle differences (e.g., what does it mean in English vs. what are the advantages in French).", "Using the two test sets obtained above, we can evaluate the accuracy of our cross-lingual semantic divergence detector, and compare it against a diverse set of baselines in controlled settings.", "We test our hypothesis that semantic divergences are more than alignment mismatches by comparing the semantic divergence detector with models that capture misalignment (Section 5.2) or translation (Section 5.3).", "Then, we compare the deep convolutional architecture of the semantic divergence model with a much simpler model that directly compares bilingual sentence embeddings (Section 5.4).", "Finally, we compare our model trained on synthetic examples with a supervised classifier used in prior work to predict finer-grained textual entailment categories based on manually created training examples (Section 5.5).", "Except for the entailment classifier, which uses external resources, all models are trained on the exact same parallel corpora (OpenSubtitles or Common Crawl for evaluating on the corresponding test bed).", "5.1 Neural Semantic Divergence Detection Model and Training Settings We use the publicly available implementation of the VDPWI model (https://github.com/castorini/VDPWI-NN-Torch).", "We initialize with 200-dimensional BiVec French-English word embeddings (Luong et al., 2015), trained on the parallel corpus from which our test set is drawn.", "We use the default setting for all other VDPWI parameters.", "The model is trained for 25 epochs and the model that achieves the best Pearson correlation coefficient on the validation set is chosen.", "At test time, VDPWI outputs a score in $[0, 1]$, where a higher value indicates less divergence.", "We tune a threshold on development data to convert the real-valued score to binary predictions.", "Synthetic Data Generation The synthetic training data is constructed using a random sample of 5000 sentences from the training parallel corpus as positive examples.", "We generate negative examples automatically as described in Section 3, and sample a subset to maintain a 1:5 ratio of positive to negative examples.",
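A minimal sketch of the threshold-tuning step just mentioned: pick the cutoff on the development set that best converts the model's $[0, 1]$ scores into binary equivalent/divergent predictions.

```python
import numpy as np

def tune_threshold(dev_scores, dev_labels, grid=np.linspace(0.0, 1.0, 101)):
    """dev_labels: 1 = equivalent, 0 = divergent; higher score = less divergent."""
    best_t, best_acc = 0.5, 0.0
    for t in grid:
        acc = np.mean((np.asarray(dev_scores) >= t) == np.asarray(dev_labels))
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t
```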
"5.2 Parallel vs. Non-parallel Classifier Are divergences observed in parallel corpora more than alignment errors?", "To answer this question, we reimplement the model proposed by Munteanu and Marcu (2005).", "It discriminates parallel pairs from non-parallel pairs in comparable corpora using a supervised linear classifier with the following features for each sentence pair (e, f): length features ($|f|$, $|e|$, $|f|/|e|$, and $|e|/|f|$); alignment features (for each of e and f: the count and ratio of unaligned words, the top three largest fertilities, and the longest contiguous unaligned and aligned sequence lengths); and dictionary features (the fraction of words in e that have a translation in f , and vice-versa).", "If divergent examples are nothing more than bad translations, a neural MT system should assign lower scores to divergent segment pairs than to those that are equivalent in meaning.", "We test this empirically using neural MT systems trained for a single epoch, and use the system to score each of the sentence pairs in the test sets.", "We tune a threshold on the development set to convert scores to binary predictions.", "The system architecture and training settings are described in detail in the later MT section (Section 6.2).", "Preliminary experiments showed that training for more than one epoch does not help divergence detection.", "Our semantic divergence model introduces a large number of parameters to combine the pairwise word comparisons into a single sentence-level prediction.", "This baseline tests whether a simpler model would suffice.", "We detect semantic divergence by computing the cosine similarity between sentence embeddings in a bilingual space.", "The sentence embeddings are bag-of-words representations, built by taking the mean of bilingual word embeddings for each word in the sentence.", "This approach has been shown to be effective, despite ignoring fundamental linguistic information such as word order and syntax (Mitchell and Lapata, 2010).", "We use the same 200-dimensional BiVec word embeddings (Luong et al., 2015), trained on OpenSubtitles and Common Crawl respectively.",
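A minimal sketch of this sentence-embedding baseline: each sentence is the average of its bilingual word vectors, and the pair is scored with cosine similarity (`emb`, a word-to-vector dictionary, is an assumption).

```python
import numpy as np

def sentence_embedding(tokens, emb, dim=200):
    vecs = [emb[w] for w in tokens if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

# score = cosine(sentence_embedding(e_tokens, emb), sentence_embedding(f_tokens, emb))
```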
"Our final baseline replicates our previous system (Carpuat et al., 2017), where we repurposed annotations and models designed for the task of Cross-Lingual Textual Entailment (CLTE) to detect semantic divergences.", "This baseline also helps us understand how the synthetic training data compares to training examples generated manually, for a related cross-lingual task.", "Using CLTE datasets from SemEval (Negri et al., 2012, 2013), we train a supervised linear classifier that can distinguish sentence pairs that entail each other from pairs where entailment does not hold in at least one direction.", "The features of the classifier are based on word alignments and sentence lengths (as in the prior work, alignments are trained on 2M sentence pairs from Europarl (Koehn, 2005) using the Berkeley aligner (Liang et al., 2006), and the classifier is the linear SVM from Scikit-Learn).", "5.6 Intrinsic Evaluation Results Table 2 shows that the semantic similarity model is most successful at distinguishing equivalent from divergent examples.", "The breakdown per class shows that both equivalent and divergent examples are better detected.", "The improvement is larger for divergent examples, with gains of about 10 points of F-score for the divergent class when compared to the next-best scores.", "Among the baseline methods, the non-entailment model is the weakest.", "While it uses manually constructed training examples, these examples are drawn from distant domains, and the categories annotated do not exactly match the task at hand.", "In contrast, the other models benefit from training on examples drawn from the same corpus as each test set.", "Next, the machine translation based model and the sentence embedding model, both of which are unsupervised, are significantly weaker than the two supervised models trained on synthetic data, highlighting the benefits of the automatically constructed divergence examples.", "The strength of the semantic similarity model compared to the sentence embeddings model highlights the benefits of the fine-grained representation of bilingual sentence pairs as a similarity cube, as opposed to the bag-of-words sentence embedding representation.", "Finally, despite training on the same noisy synthetic data as the parallel vs. non-parallel system, the semantic similarity model is better able to detect meaning divergences.", "This highlights the benefits of (1) meaning comparison between words in a shared embedding space, over the discrete translation dictionary used by the baseline, and of (2) the deep convolutional neural network which enables the semantic comparison of overlapping spans in sentence pairs, as opposed to more local word alignment features.", "We manually examine the 13-15% of examples in each test set that are correctly detected as divergent by semantic similarity and misclassified by the non-parallel detector.", "On OpenSubtitles, most of these examples are true divergences rather than noisy alignments (i.e., sentences that are not translations of each other).", "The non-parallel detector weighs length features highly, and is fooled by sentence pairs of similar length that share little content and therefore have very sparse word alignments.", "The remaining sentence pairs are plausible translations in some context that still contain inherent divergences, such as details missing or added in one language.", "The non-parallel detector views these pairs as non-divergent since most words can be aligned.", "The semantic similarity model can identify subtle meaning differences, and correctly classify them as divergent.", "As a result, the non-parallel detector has a higher false positive rate (22%) than the semantic similarity classifier (14%), while having similar false negative rates (11% vs. 12%).", "On the CommonCrawl test set, the examples with disagreement are more diverse.", "Table 2: Intrinsic evaluation on crowdsourced semantic equivalence vs. divergence test sets (precision/recall/F for the equivalent (+) and divergent (-) classes, plus overall F):
Approach | OpenSubtitles +P/+R/+F | -P/-R/-F | Overall F | Common Crawl +P/+R/+F | -P/-R/-F | Overall F
Sentence Embeddings | 65/60/62 | 56/61/58 | 60 | 78/58/66 | 52/74/61 | 64
MT Scores (1 epoch) | 67/53/59 | 54/68/60 | 60 | 54/65/59 | 17/11/14 | 42
Non-entailment | 58/78/66 | 53/30/38 | 54 | 73/49/58 | 48/72/57 | 58
Non-parallel | 70/83/76 | 61/42/50 | 66 | 70/83/76 | 61/42/49 | 67
Semantic Dissimilarity | 76/80/78 | 75/70/72 | 77 | 82/88/85 | 78/69/73 | 80",
divergence testsets.", "Having established the effectiveness of the semantic divergence detector, we now measure the impact of divergences on a downstream task, machine translation.", "As in our prior work (Carpuat et al., 2017), we take a data selection approach, selecting the least divergent examples in a parallel corpus based on a range of divergence detectors, and comparing the translation quality of the resulting neural MT systems.", "English-French We evaluate on 4867 sentences from the Microsoft Spoken Language Translation dataset (Federmann and Lewis, 2016) as well as on 1300 sentences from TED talks (Cettolo et al., 2012), as in past work (Carpuat et al., 2017).", "Training examples are drawn from OpenSubtitles, which contains ~28M examples after deduplication.", "We discard 50% examples for data selection.", "Vietnamese-English Since the SEMANTICSIMILARITY model was designed to be easily portable to new language pairs, we also test its impact on the IWSLT Vietnamese-English TED task, which comes with ~120,000 and 1268 in-domain sentences for training and testing respectively (Farajian et al., 2016).", "This is a more challenging translation task as Vietnamese and English are more distant languages, there is little training data, and the sentence pairs are expected to be cleaner and more parallel than those from OpenSubtitles.", "In these lower resource settings, we discard 10% of examples for data selection.", "We use the attentional encoder-decoder model (Bahdanau et al., 2015) implemented in the SockEye toolkit (Hieber et al., 2017).", "Encoders and decoders are single-layer GRUs (Cho et al., 2014) with 1000 hidden units.", "Source and target word embeddings have size 512.", "Using byte-pair encoding (Sennrich et al., 2016), the vocabulary size is 50000.", "Maximum sequence length is set to 50.", "We optimize the standard cross-entropy loss using Adam (Kingma and Ba, 2014), until validation perplexity does not decrease for 8 checkpoints.", "The learning rate is set to 0.0003 and is halved when the validation perplexity does not decrease for 3 checkpoints.", "The batch size is set to 80.", "At decoding time, we construct a new model by averaging the parameters for the 8 checkpoints with best validation perplexity, and decode with a beam of", "5. All experiments are run thrice with distinct random seeds.", "Note that the baseline in this work is much stronger than in our prior work ( > 5 BLEU).", "This is due to multiple factors that have been recommended as best practices for neural MT and have been incorporated in the present baseline deduplication of the training data, ensemble decoding using multiple random runs, use of Adam as the optimizer instead of AdaDelta (Bahar et al., 2017; Denkowski and Neubig, 2017), and checkpoint averaging (Bahar et al., 2017) as well as a more recent neural modeling toolkit.", "We train English-French neural MT systems by selecting the least divergent half of the training corpus with the following criteria:", "SEMANTICSIMILARITY (Section 3) PARALLEL : the non-parallel sentence detector (Section 5.2) ENTAILMENT : the entailment classifier (Sec-tion 5.5), as in Carpuat et al. 
"Learning curves (Figure 1) show that data selected using SEMANTICSIMILARITY yields better validation BLEU throughout training compared to all other models.", "SEMANTICSIMILARITY selects more useful examples for MT than PARALLEL , even though both selection models are trained on the same synthetic examples.", "This highlights the benefits of semantic modeling over surface misalignment features.", "Furthermore, SEMANTICSIMILARITY achieves the final validation BLEU of the model that uses ALL data with only 30% of the updates.", "This suggests that semantically divergent examples are pervasive in the training corpus, confirming the findings from manual annotation (Section 4), and that the presence of such examples slows down neural MT training.", "Decoding results on the blind test sets (Table 3) show that SEMANTICSIMILARITY outperforms all other data selection criteria (with differences being statistically significant (p < 0.05) (Koehn, 2004)), and performs as well or better than the ALL model, which has access to twice as many training examples.", "The SEMANTICSIMILARITY model also achieves significantly better translation quality than the ENTAILMENT model used in our prior work.", "Surprisingly, the ENTAILMENT model performs worse than the ALL baseline, unlike in our prior work.", "We attribute this different behavior to several factors: the strength of the new baseline (Section 6.2), the use of Adam instead of AdaDelta, which results in stronger BLEU scores at the beginning of the learning curves for all models, and finally the deduplication of the training data.", "In our prior systems, the training corpus was not deduplicated.", "Data selection had a side-effect of reducing the ratio of duplicated examples.", "When the ENTAILMENT model was used, longer sentence pairs with more balanced length were selected, yielding longer translations with a better BLEU brevity penalty than the baseline.", "With the new systems, these advantages vanish.", "We further analyze output lengths in Section 6.5.", "Trends from English-French carry over to Vietnamese-English, as the SEMANTICSIMILARITY model compares favorably to ALL while reducing the number of training updates by 10%.", "SEMANTICSIMILARITY also yields better BLEU than RANDOM, with the differences being statistically significant.", "While differences in score here are smaller, these results are encouraging since they demonstrate that our semantic divergence models port to more distant low-resource language pairs.", "We break down the results seen in Figure 1 and Table 3, with a focus on the behavior of the ENTAILMENT and ALL models.", "We start by analyzing the BLEU brevity penalty trends observed on the validation set during training (Figure 2).", "We observe that both the ENTAILMENT and SEMANTICSIMILARITY based models have similar brevity penalties despite having performances that are at opposite ends of the spectrum in terms of BLEU.", "This implies that translations generated by the SEMANTICSIMILARITY model have better n-gram overlap with the reference, but are much shorter.", "Manual examination of the translations suggests that the ENTAILMENT model often fails by under-translating sentences, either dropping segments from the beginning or the end of source sentences (Table 5).", "The PARALLEL model consistently produces translations that are longer than the reference.", "This is partially due to the model's propensity to generate a sequence of garbage tokens in the
beginning of a sentence, especially while translating shorter sentences.", "In our test set, almost 12% of the translated sentences were found to begin with the garbage text shown in Table 5.", "Only a small fraction (< 0.02%) of the French sentences in our training data begin with these tokens, but the tendency of PARALLEL to promote divergent examples above non-divergent ones seems to exaggerate the generation of this sequence.", "We conducted an extensive empirical study of semantic divergences in parallel corpora.", "Our crowdsourced annotations confirm that correctly aligned sentences are not necessarily meaning equivalent.", "We introduced an approach based on neural semantic similarity that detects such divergences much more accurately than shallower translation or alignment-based models.", "Importantly, our model does not require manual annotation, and can be trained for any language pair and domain with a parallel corpus.", "Finally, we show that filtering out divergent examples helps speed up the convergence of neural machine translation training without loss in translation quality, for two language pairs and data conditions.", "New datasets and models introduced in this work are available at http://github.com/yogarshi/SemDiverge .", "These findings open several avenues for future work: How can we improve divergence detection further?", "Can we characterize the nature of the divergences beyond binary predictions?", "How do divergent examples impact other applications, including cross-lingual NLP applications and semantic models induced from parallel corpora, as well as tools for human translators and second language learners?", "We thank the CLIP lab at the University of Maryland and the anonymous reviewers from NAACL 2018 and WMT 2017 for their constructive feedback.", "This work was supported in part by research awards from Amazon, Google, and the Clare Boothe Luce Foundation." ]
[ "method", "result", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "objective", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "result", "other", "result", "abstain", "abstain", "other", "other" ]
[ "The performance of a Part-of-speech (POS) tagger is highly dependent on the domain of the processed text, and for many domains there is no or only very little training data available.", "This work addresses the problem of POS tagging noisy user-generated text using a neural network.", "We propose an architecture that trains an out-of-domain model on a large newswire corpus, and transfers those weights by using them as a prior for a model trained on the target domain (a data-set of German Tweets) for which there is very little annotations available.", "The neural network has two standard bidirectional LSTMs at its core.", "However, we find it crucial to also encode a set of task-specific features, and to obtain reliable (source-domain and target-domain) word representations.", "Experiments with different regularization techniques such as early stopping, dropout and fine-tuning the domain adaptation prior weights are conducted.", "Our best model uses external weights from the out-of-domain model, as well as feature embeddings, pretrained word and sub-word embeddings and achieves a tagging accuracy of slightly over 90%, improving on the previous state of the art for this task.", "Part-of-speech (POS) tagging is a prerequisite for many applications and necessary for a wide range of tools for computational linguists.", "The state-of-the art method to implement a tagger is to use neural networks (Ma and Hovy, 2016; Yang et al., 2018).", "The performance of a POS tagger is highly dependent on the domain of the processed text and the availability of sufficient training data (Schn-abel and Schutze, 2014).", "Existing POS taggers for canonical German text already achieve very good results around 97% accuracy, e.g. (Schmid, 1999; Plank et al., 2016).", "When applying these trained models to out-of-domain data the performance decreases drastically.", "One of the domains where there is not enough data is online conversational text in platforms such as Twitter, where the very informal language exhibits many phenomena that differ significantly from canonical written language.", "In this work, we propose a neural network that combines a character-based encoder and embeddings of features from previous non-neural approaches (that can be interpreted as an inductive bias to guide the learning task).", "We further show that the performance of this already effective tagger can be improved significantly by incorporating external weights using a mechanism of domain-specific L2-regularization during the training on in-domain data.", "This approach establishes state-of-the-art results of 90.3% accuracy on the German Twitter corpus of Rehbein (2013).", "The first POS tagging approach for German Twitter data was conducted by Rehbein (2013) and reaches an accuracy of 88.8% on the test set using a CRF.", "They use a feature set with eleven different features and an extended version of the STTS (Schiller et al., 1999) as a tagset.", "Gimpel et al. (2011) developed a tagset for English Twitter data and report results of 89.37% on their test set using a CRF with different features as well.", "POS tagging for different languages using a neural architecture was successfully applied by Plank et al. (2016).", "The data comes from the Universal Dependencies project 1 and mainly contains German newspaper texts and Wikipedia articles.", "The work of Barone et al. 
"The work of Barone et al. (2017) investigates different regularization mechanisms in the field of domain adaptation.", "They use the same L2 regularization mechanism for neural machine translation as we do for POS tagging.", "The Stuttgart-Tübingen-TagSet (STTS, Schiller et al. (1999)) is widely used as the state-of-the-art tagset for POS tagging of German.", "Bartz et al. (2013) show that the STTS is not sufficient when working with textual data from online social platforms, as online texts do not have the same characteristics as formal-style texts, nor are they identical to spoken language.", "Online conversational text often contains contracted forms, graphic reproductions of spoken language such as prolongations, interjections and grammatical inconsistencies, as well as a high rate of misspellings, omission of words, etc.", "For POS tagging we use the tagset of Rehbein (2013), where (following Gimpel et al. (2011)) additional tags are provided to capture peculiarities of the Twitter corpus.", "This tagset provides tags for @-mentions, hashtags and URLs.", "They also provide a tag for non-verbal comments such as *Trommelwirbel* (drum-roll).", "Additional, complex tags for amalgamated word forms were used (see Gimpel et al. (2011)).", "Overall the tagset used in our target domain contains 15 tags more than the original STTS.", "Two corpora with different domains are used in this work.", "One of them is the TIGER corpus and the other is a collection of German Twitter data.", "The texts in the TIGER corpus (Brants et al., 2004) are taken from the Frankfurter Rundschau newspaper and date from 1995 over a period of two weeks.", "The annotation of the corpus was created semi-automatically.", "The basis for the annotation of POS tags is the STTS.", "The TIGER corpus is one of the standard corpora for German in NLP and contains 888.505 tokens.", "The Twitter data was collected by Rehbein (2013) within eight months in 2012 and 2013.", "The complete collection includes 12.782.097 distinct tweets, from which 1.426 tweets were randomly selected for manual annotation with POS tags.", "The training set is comparably small and holds 420 tweets, whereas the development and test set hold around 500 tweets each (overall 20.877 tokens).", "Since this is the only available German annotated Twitter corpus, we use it for this work.", "The usage of pretrained word embeddings can be seen as a standard procedure in NLP to improve the results with neural networks (see Ma and Hovy (2016)).", "FastText provides pretrained sub-word embeddings for 158 different languages and allows obtaining word vectors for out-of-vocabulary words.", "The pretrained vectors for German are based on Wikipedia articles and data from Common Crawl.", "We obtain 97.988 different embeddings for the tokens in TIGER and the Twitter corpus, of which 75.819 were already contained in Common Crawl and 22.171 were inferred from sub-word units.", "Spinningbytes is a platform for different applications in NLP and provides several solutions and resources for research.", "They provide word embeddings for different text types and languages, including Word2Vec (Mikolov et al., 2013) vectors pretrained on 200 million German Tweets.", "Overall, 17.030 word embeddings from the Spinningbytes vectors are used (other words are initialized all-zero).",
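For illustration, a hedged sketch of obtaining such sub-word-based vectors for out-of-vocabulary tokens with the official fasttext Python package; the model file name is an assumption, and the paper does not specify its exact lookup procedure.

```python
import fasttext

# pretrained German vectors (Wikipedia + Common Crawl); file name assumed
ft = fasttext.load_model("cc.de.300.bin")
vec = ft.get_word_vector("Trommelwirbel")  # works even for OOV tokens,
                                           # composed from character n-grams
```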
"Lample et al. (2016) show that the usage of a character-level encoder is expedient when using bidirectional LSTMs.", "Our implementation of this encoder follows Hiroki Nakayama (2017), where character embeddings are passed to a bidirectional LSTM and the output is concatenated to the word embeddings.", "This section describes the proposed architecture of the neural network and the conditional random field used in the experiments.", "For comparison of the results we also experiment with jointly training on a merged training set, which contains the Twitter and the TIGER training sets.", "The baseline CRF of Rehbein (2013) achieves an accuracy of 82.49%.", "To be comparable with their work, we implement a CRF equivalent to their baseline model.", "Each word in the data is represented by a feature dictionary.", "We use the same features as Rehbein proposed for the classification of each word.", "These are the lowercased word form, word length, number of uppercase letters, number of digits, and occurrence of a hashtag, URL, @-mention or symbol.", "The first layer in the model is an embedding layer.", "The next layers are two bidirectional LSTMs.", "The baseline model uses a softmax for each position in the final layer and is optimized using Adam with a learning rate of 0.001 and the categorical cross-entropy as the loss function.", "The non-neural CRF model benefits from different features extracted from the data.", "Those features are not explicitly modeled in the neural baseline model, and we apply a feature function for the extended neural network.", "We include the features used in the non-neural CRF for hashtags and @-mentions.", "In addition, we capture orthographic features, e.g., whether a word starts with a digit or an upper-case letter.", "Typically, manually defined features like these are not used in neural networks, as a neural network should take over feature engineering completely.", "Since this does not work optimally, especially for smaller data sets, we have decided to give the neural network this type of information as well.", "Thus we combine the advantages of classical feature engineering and neural networks.",
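A minimal sketch of such a feature function; the feature set follows the description above, but the exact names are illustrative.

```python
import re

def word_features(word):
    return {
        "lower": word.lower(),
        "length": len(word),
        "n_upper": sum(c.isupper() for c in word),
        "n_digits": sum(c.isdigit() for c in word),
        "starts_upper": word[:1].isupper(),
        "starts_digit": word[:1].isdigit(),
        "is_mention": word.startswith("@"),
        "is_hashtag": word.startswith("#"),
        "is_url": bool(re.match(r"https?://", word)),
    }
```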
"This also goes along with the observations of Plank et al. (2018) and Sagot and Martínez Alonso (2017), who both show that adding conventional lexical information improves the performance of a neural POS tagger.", "All words are represented by their features, and for each feature type an embedding layer is set up within the neural network in order to learn vectors for the different feature expressions.", "Afterwards, all the feature embeddings are added together.", "As the next step we use the character-level layer mentioned in Section 3.6 (Lample et al., 2016).", "The following vector sequences are concatenated at each position and form the input to the bidirectional LSTMs: the feature embedding vector, the character-level encoder output, the FastText vectors, and the Word2Vec vectors.", "4.1.4 Domain Adaptation and Regularization We train the model with the optimal setting on the TIGER corpus, i.e., we prepare the TIGER data just like the Twitter data and extract features, include a character-level layer and use pretrained embeddings.", "We extract the weights $W^*$ that were optimized on TIGER.", "The prior weights $W^*$ are used during optimization as a regularizer for the weights $W$ used in the final model (trained on the Twitter data).", "This is achieved by adding the penalty term $R_W = \lambda \lVert W - W^* \rVert_2^2$, as shown in Equation 1, to the objective function (cross-entropy loss).", "The regularization is applied to the weights of the two LSTMs, the character LSTM, all of the embedding layers, and the output layer.", "As a second regularization mechanism we include dropout for the forward and the backward LSTM layers.", "We also add 1 to the bias of the forget gate at initialization, since this is recommended in Jozefowicz et al. (2015).", "Additionally, we use early stopping.", "Since the usage of different regularization techniques worked well in the experiments of Barone et al. (2017), we also tried the combination of different regularizers in this work.",
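A hedged PyTorch sketch of this prior-based regularizer; the exact form of Equation 1 is an assumption following the L2-prior scheme of Barone et al. (2017).

```python
import torch

def prior_l2_penalty(model, prior_state, lambd=1e-3):
    """Pull the in-domain weights W toward the out-of-domain prior W*."""
    penalty = 0.0
    for name, w in model.named_parameters():
        if name in prior_state:  # LSTMs, embeddings, and output layer only
            penalty = penalty + ((w - prior_state[name]) ** 2).sum()
    return lambd * penalty

# loss = cross_entropy(logits, tags) + prior_l2_penalty(model, prior_state)
```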
(2017) for transfer-learning for machine translation.", "Overall the addition of the L2 fine-tuning can improve the tagging outcome by 5 percentage points, compared to not doing domain adaptation.", "A binomial test shows that this improvement is significant.", "This result confirms the intuition that the tagger can benefit from the pretrained weights.", "On top of fine-tuning different dropout rates were added to both directions of the LSTMs for the character level layer and the joint embeddings.", "A dropout rate of 75% is optimal in our scenario, and it increases the accuracy by 0.7 percentage points.", "The final 90.3% on the test set outperform the results of Rehbein (2013) by 1.5 percentage points.Our best score also outperforms the accuracy obtained with the NCRF++ model.", "This shows that for classifying noisy user-generated text, explicit feature engineering is beneficial, and that the usage of domain adaptation is expedient in this context.", "Joint training, using all data (out-of-domain and target domain), can obtain an accuracy score of 89.4%, which is about 1 percentage point worse than using the same data with domain adaptation.", "The training setup for the joint training is the same as for the other experiments and includes all extensions except for the domain adaptation.", "The most frequent error types in all our systems were nouns, proper nouns, articles, verbs, adjec-Figure", "adjec-Figure 3: Total number of errors for the six most frequent POS-tags and different experimental settings", "tives and adverbs as pictured in figure 3.", "By including the features the number of errors can be reduced drastically for nouns.", "Since we included a feature that captures upper and lower case, and nouns as well as proper nouns are written upper case in German, the model can benefit from that information.", "The pretrained word embeddings also help classifying nouns, articles, verbs, adjectives and adverbs.", "Only the errors with proper nouns increase slightly.", "Compared to only including the features, the model can benefit from adding both, the character level layer and the pretrained word vectors, while the results for tagging proper nouns and articles are still slightly worse than the baseline.", "In contrast the final experimental setup can optimize the results for every POS tag compared to the baseline, see figure 3.", "Slightly in case of articles and proper nouns, but markedly for the other tags.", "A comparison of the baseline errors and the errors of the final system shows that Twitter specific errors, e.g. 
with @-mentions or URLs, can be reduced drastically.", "Only hashtags still pose a challenge for the tagger.", "In the gold standard, words with hashtags are not always tagged as such, but are sometimes classified as proper nouns.", "This is due to the fact that the function of the token in the sentence is that of a proper noun.", "Thus the tagger has decision problems with these hashtags.", "Other types of errors, such as confusion of articles or nouns, are not Twitter-specific issues, but are often a problem with POS tagging and can only be fixed by general improvement of the tagger.", "We presented a neural POS tagger for noisy user-generated German text that combines domain adaptation and regularization techniques.", "On top of an efficient POS tagger we implemented domain adaptation by using an L2-norm regularization mechanism, which improved the model's performance by 5 percentage points.", "Since this improvement is significant, we conclude that fine-tuning and domain adaptation techniques can successfully be used to improve the performance when training on a small target-domain corpus.", "Our experiments show that the combination of different regularization techniques is recommendable and can further optimize already efficient systems.", "The advantage of our approach is that we do not need a large annotated target-domain corpus, but only pretrained weights.", "Using a pretrained model as a prior for training on a small amount of data is done within minutes and is therefore very practicable in real-world scenarios." ]
[ "abstain", "method", "objective", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain" ]
[ "Neural-based summarization models suffer from the length limitation of text encoder.", "Long documents have to been truncated before they are sent to the model, which results in huge loss of summary-relevant contents.", "To address this issue, we propose the sliding selector network with dynamic memory for extractive summarization of long-form documents, which employs a sliding window to extract summary sentences segment by segment.", "Moreover, we adopt memory mechanism to preserve and update the history information dynamically, allowing the semantic flow across different windows.", "Experimental results on two large-scale datasets that consist of scientific papers demonstrate that our model substantially outperforms previous state-of-the-art models.", "Besides, we perform qualitative and quantitative investigations on how our model works and where the performance gain comes from.", "1 1 Introduction Text summarization is an important task of natural language processing which aims to distil salient contents from a textual document.", "Existing summarization models can be roughly classified into two categories, which are abstractive and extractive.", "Abstractive summarization usually adopts natural language generation technology to produce a word-by-word summary.", "In general, these approaches are flexible but may yield disfluent summaries (Liu and Lapata, 2019a).", "By comparison, extractive approaches aim to select a subset of the sentences in the source document, thereby enjoying better fluency and efficiency (Cao et al., 2017).", "Paragraph 1 : Medical tourism is illustrated as occurrence in which individuals travel abroad to receive healthcare services.", "It is a multibillion dollar industry and countries like India, Thailand, Israel, Singapore, Paragraph 2 : The prime driving factors in medical tourism are increased medical costs, increased insurance premiums, increasing number of uninsured or partially insured individuals in developed countries, Paragraph 5 : It is generally presumed in marketing that products with similar characteristics will be equally preferred by the consumers, however, attributes, which make the product similar to other products, will not.", "to achieve desired performance when directly applied in long-form documents, such as scientific papers.", "This inferior performance is partly due to the truncation operation, which inevitably leads to information loss, especially for extractive models because parts of gold sentences would be inaccessible.", "In addition, the accurate modeling of long texts remains a challenge (Frermann and Klemen-tiev, 2019).", "A practical solution for this problem is to use a sliding window to process documents separately.", "This approach is used in other NLP tasks, such as machine reading comprehension (Wang et al., 2019b).", "However, such a paradigm is not suitable for summarization task because the concatenation of summaries that are independently extracted from local contexts is usually inconsistent with the gold summary of the entire document.", "Figure 1 shows an example to illustrate this problem.", "The core topic of the source document is medical tourism , which is discussed in Paragraphs 1 and 2. 
However, the fifth paragraph is mainly about consumers and products.", "As a consequence, the paragraph-by-paragraph extraction approach might produce a summary that is both repetitive and noisy.", "Under this circumstance, the supervised signals will have a negative effect on model behavior, because without information carried over from the preceding text, the model cannot understand why Paragraph 5 should yield an empty result.", "In this paper, we propose a novel extractive summarization model for long-form documents.", "We split the input document into multiple windows and encode them with a sliding encoder sequentially.", "During this process, we introduce a memory to preserve salient information learned from previous windows, which is used to complete and enrich local texts.", "Intuitively, our model has the following advantages: 1) In each window, the text encoder processes a relatively short segment, thereby yielding more accurate representations.", "2) The local text representations can capture beyond-window contextual information via the memory module.", "3) The previous selection results are also parameterized in the memory block, allowing collaboration among summary sentences.", "To sum up, our contributions are threefold.", "(1) We propose a novel extractive summarization model that can summarize documents of arbitrary length without truncation loss.", "Also, it employs the memory mechanism to address context fragmentation.", "To the best of our knowledge, we are the first to propose applying memory networks to the extractive text summarization task.", "(2) The proposed framework (i.e., a sliding encoder combined with dynamic memory) provides a general solution for summarizing long documents and can be easily extended to other abstractive and extractive summarization models.", "(3) Our model achieves state-of-the-art results on two widely used datasets for long document summarization.", "Moreover, we conduct extensive analysis to understand how our model works and where the performance gain comes from.", "Neural Extractive Summarization.", "Neural networks have become the dominant approach for extractive summarization.", "Existing studies usually formulate this task as sentence labelling (Dong et al., 2018; Nallapati et al., 2016; Zhang et al., 2019) or sentence ranking (Narayan et al., 2018).", "Among them, recurrent neural networks (Cheng and Lapata, 2016; Zhou et al., 2018), Transformer encoders (Wang et al., 2019a), and graph neural networks (Wang et al., 2020; Xu et al., 2019; Cui et al., 2020) have been used to learn sentence representations.", "Recently, pre-trained language models (e.g., BERT; Devlin et al., 2018) have provided substantial performance gains for extractive summarization.", "Liu and Lapata (2019b) modified standard BERT for document modelling.", "Xu et al. (2019) used a span-BERT to perform span-level summarization.", "Zhong et al. (2020) regarded document summarization as a semantic matching task and used a Siamese-BERT as the matching model.", "However, the maximum input length of standard BERT is only 512 tokens, which means most of these models can hardly generalize to long-form documents effectively.", "Long Document Summarization.", "Recent years have seen a surge of interest in long document summarization, especially of scientific publications.", "Celikyilmaz et al. (2018) used a multi-agent framework to boost the encoder performance.", "Cohan et al.
(2018) proposed a hierarchical network that incorporates the discourse structures into the encoder and decoder.", "Xiao and Carenini (2019) proposed to model the local and global contexts jointly.", "Cui et al. (2020) proposed a hybrid model that employs a neural topic model (NTM) to infer latent topics as a kind of global information.", "Despite their success, these approaches still face the input length limitation and the difficulty of encoding long texts accurately.", "In comparison, our model addresses these problems with a novel segment-wise extraction scheme and can summarize arbitrarily long documents without any content truncation.", "Memory Networks.", "Memory networks (Weston et al., 2015) are a general framework that employs a memory bank to model long-term information.", "Due to its flexible architecture and superior adaptability, this framework has been applied to various NLP scenarios, such as text classification (Zeng et al., 2018), question answering (Kumar et al., 2016; Xiong et al., 2016), and sentiment analysis (Tang et al., 2016).", "In this study, we leverage a memory module to capture beyond-window context when performing segment-level summarization.", "To the best of our knowledge, memory networks have never been applied to the extractive summarization task.", "This section describes our model, namely, the Sliding Selector Network with Dynamic Memory (SSN-DM), whose overall architecture is shown in Figure 2.", "Formally, given a document D of arbitrary length, we first split D into multiple segments according to the pre-defined window length.", "Then, we use a BERT encoder to sequentially encode each segment and select salient sentences.", "During this process, a memory module is applied to achieve the information flow across different windows.", "Finally, the extracted sentences are aggregated to generate the final summary.", "We elucidate each module in the following subsections.", "Let $\mathrm{seg}^k = \{s^k_1, s^k_2, \dots, s^k_n\}$ be the $k$th window, consisting of $n$ sentences.", "We encode the window text with a pre-trained BERT, which has been proven effective on the extractive summarization task (Liu and Lapata, 2019b; Xu et al., 2019; Cui et al., 2020).", "Following previous studies, we modify the standard BERT by inserting [CLS] and [SEP] tokens at the beginning and end of each sentence, respectively, where $w^k_{i,j}$ denotes the $j$th word of the $i$th sentence.", "$O_B = \{h^k_{1,\mathrm{CLS}}, h^k_{1,2}, \dots, h^k_{n,\mathrm{SEP}}\}$ denotes the representations of each token learned by BERT.", "We regard the hidden states of the [CLS] tokens, $H^k = \{h^k_{1,\mathrm{CLS}}, h^k_{2,\mathrm{CLS}}, \dots, h^k_{n,\mathrm{CLS}}\}$, as the corresponding sentence representations.",
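The per-window encoding step lends itself to a short example. The following sketch assumes the Huggingface Transformers API; the pooling of [CLS] states follows the description above, while the function name and window construction are illustrative.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode_window(sentences, max_len=512):
    """Wrap every sentence in [CLS] ... [SEP] and take each [CLS]
    hidden state as that sentence's representation H^k."""
    text = " ".join(f"[CLS] {s} [SEP]" for s in sentences)
    enc = tokenizer(text, add_special_tokens=False, truncation=True,
                    max_length=max_len, return_tensors="pt")
    hidden = encoder(**enc).last_hidden_state.squeeze(0)      # (seq_len, 768)
    cls_mask = enc["input_ids"].squeeze(0) == tokenizer.cls_token_id
    return hidden[cls_mask]                                   # (n_sentences, 768)
```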
"On top of the BERT encoder, we add an additional layer to incorporate two types of structural information.", "The first part is the position information of the current window.", "In our segment-wise encoding, the position embeddings equipped in BERT are recalculated in each window, thereby losing the exact position of each token in the entire document.", "This positional bias may lead to inferior performance (Zhong et al., 2019; Dai et al., 2019).", "To address this problem, we assign a window-level position encoding to each window as a complementary feature, indicating its relative position in the document.", "In addition, we further introduce a group of section (e.g., introduction, conclusion) embeddings to capture discourse information, which has been shown to be an important feature for scientific paper summarization (Cohan et al., 2018).", "Combining these gives the final sentence representations (Eq. 2), where $e^k_w$ indicates the $k$th window-level position embedding, and $e_s$ the section embedding. Both of them are randomly initialized and learned as part of the model. Throughout the paper, $W$ represents a trainable parameter matrix.", "Notably, the section features might not be generally available for long texts of other genres. Therefore, in our experiments, we consider $e_s$ an optional setting and conduct quantitative investigations in Section 5 to probe into its effect on model performance.", "After encoding the window text, we infuse the history information of previous texts into the learned representations $H^k$ via a memory module. Let $M^k \in \mathbb{R}^{l \times d_m}$ be the memory block in the $k$th window that preserves salient information from the previous $k-1$ windows, where $l$ is the number of memory slots and $d_m$ the dimension of each memory vector. $M^0$ is initialized with fixed values in the first window and then updated dynamically in the learning process. The detail of this part is explained in Section 3.4.", "We use a graph neural network to model the interaction between the memory module and the current window. Concretely, we first construct a bipartite graph that consists of $l$ memory nodes and $n$ sentence nodes, whose embeddings are initialized with $M^k$ and $H^k$, respectively.", "Then, we use a graph attention network (GAT; Velickovic et al., 2018) to encode this graph.", "Given a sentence node $h^k_i$, we update its representation by aggregating its neighboring nodes as follows: $z^k_{i,j} = \mathrm{LeakyReLU}(W_a [h^k_i ; \mathrm{SG}(m^k_j)])$, $\alpha_{i,j} = \frac{\exp(z^k_{i,j})}{\sum_{j'=1}^{l} \exp(z^k_{i,j'})}$, $\tilde{h}^k_i = \Vert_{t=1}^{T} \sum_{j=1}^{l} \tanh(\alpha^t_{i,j} W^t_c \, \mathrm{SG}(m^k_j))$ (3), where $\alpha_{i,j}$ denotes the attention weight from node $h^k_i$ to node $m^k_j$.", "Multi-head attention is applied to stabilize the calculation process.", "The function $\mathrm{SG}(\cdot)$ stands for the stop-gradient operation.", "We refer to $\tilde{H}^k$ and $\tilde{M}^k$ as the sentence representations and memory vectors after graph propagation, respectively.", "During the graph interaction, the sentence representations are completed and enriched by history information, and vice versa.", "Empirical observations of prior research (Tang et al., 2016; Zeng et al., 2018) have shown that stacking multiple memory layers can bring further performance gains.", "Similarly, in our model, the multi-hop setting can be achieved by increasing the graph iteration number, i.e., repeating the GAT calculation process (Eq. 3).",
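A condensed, single-head PyTorch sketch of the sentence-to-memory attention in Eq. 3 follows (the full model concatenates T heads); the class name and dimensions are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAttention(nn.Module):
    """Single-head version of Eq. 3: each sentence node attends over
    the l memory slots; memory enters with a stop-gradient, SG(.)."""
    def __init__(self, d=768):
        super().__init__()
        self.w_a = nn.Linear(2 * d, 1)   # produces scores z_{i,j}
        self.w_c = nn.Linear(d, d)       # value projection W_c

    def forward(self, H, M):             # H: (n, d) sentences, M: (l, d) memory
        M = M.detach()                   # SG(.)
        n, l = H.size(0), M.size(0)
        pairs = torch.cat([H.unsqueeze(1).expand(n, l, -1),
                           M.unsqueeze(0).expand(n, l, -1)], dim=-1)
        z = F.leaky_relu(self.w_a(pairs)).squeeze(-1)          # (n, l)
        alpha = z.softmax(dim=-1)                              # attention weights
        vals = self.w_c(M).unsqueeze(0)                        # (1, l, d)
        return torch.tanh(alpha.unsqueeze(-1) * vals).sum(1)   # (n, d), i.e. H-tilde
```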
3).", "We have obtained the sentence representations H k derived from window text, and its extended version H k enriched by memory information.", "Given ith sentence, we send h ki and h ki into a MLP classifier to compute its summary label.", "where y i represents the predicted probability of ith sentence, and represents the point-wise operation.", "f o is a feed-forward network with three hidden layers.", "We construct interaction features between h ki and h ki to capture the importance of ith sentence in both current segment and history context.", "The training objective of the model is to minimize the binary cross-entropy loss given the predictions and ground truth sentence labels, i.e., L = (cid:80) y i log ( y i ) + (1 y i ) log (1 y i ) After processing the entire document, we rank all the sentences and select top-k as the final summary, where k is a hyperparameter set according to the average length of reference summaries.", "It worth noting that the memory module also acts as an intermediary to make the sentence scores of different windows comparable.", "Now we explain the learning process of memory module.", "Figure 3 presents the information flow of our model.", "In each window, after the prediction layer, we update the memory values with two inputs.", "First, recall that in GAT calculation, the updated memory vectors M k has also encoded the contextual information of the current window during the interaction with H k .", "Therefore, we combine M k and M k with gating mechanism (Chung et al., 2014).", "ki = tanh ( W m m ki ) , u ki = ki m ki + (1 ki ) m ki (5) where u ki is the liner interpolation between history memory m ki and the newly computed mi k .", "ki R d m is an gate vector to modulates the information flow.", "The second part refers to the extraction result of the current window.", "We first aggregate the sentence representations with their predicted probabilities (Eq.4) to parameterize the selected sentences.", "Here, r ksum can be considered a sentence-level coverage vector (See et al., 2017) that records what contents has been extracted from the current window.", "This ensures that the following selection is informed by previous decisions.", "Then, we use a single feedforward layer to generate new memory M k +1 = { m k +11 , . . . 
"Our model is particularly designed for long document summarization.", "For this reason, we do not conduct experiments on the widely explored news datasets consisting of relatively short documents.", "For example, the articles in the DailyMail (Hermann et al., 2015) dataset have an average of 600 words, which can be effectively processed by most existing models.", "Instead, following prior research on long-form document summarization (Cohan et al., 2018; Xiao and Carenini, 2019; Cui et al., 2020; Zhong et al., 2020), we evaluate our model on the following two large-scale scientific paper datasets.", "For example, the maximum length of standard BERT is 512, which means that a large proportion (colored in grey in Figure 4) of ground-truth sentences would be inaccessible for existing state-of-the-art BERT-based summarization models.", "Figure 4: Position distribution of gold sentences on the two datasets.", "We compare our model against the following state-of-the-art summarization approaches.", "Pointer Generator Network (PGN; See et al., 2017) extends the standard seq2seq framework with attention, coverage, and copy mechanisms.", "Discourse-Aware (Cohan et al., 2018) is an abstractive model particularly designed for summarizing long-form documents with discourse structure.", "It employs a hierarchical encoder and explicitly introduces the section information of scientific papers.", "Seq2seq-local&global (Xiao and Carenini, 2019) is an extractive model for long document summarization that jointly encodes local and global contexts.", "Match-Sum (Zhong et al., 2020) is a state-of-the-art BERT-based summarization model.", "It performs summary-level extraction based on the matching scores between the candidate summary and the source document.", "Topic-GraphSum (Cui et al., 2020) introduces a joint neural topic model to explore latent topics as a kind of global information to help summarize long documents.", "Since Cui et al.
(2020) used different data preprocessing, we repeat the experiments using the model released by the authors and preprocess the data in accordance with previous studies (Cohan et al., 2018; Xiao and Carenini, 2019) to make the results comparable.", "For the sliding encoder, we use the bert-base-uncased version with a hidden size of 768 and fine-tune it for all experiments.", "The maximum length of each window is set to 512, and we segment the documents with the sentence as the smallest unit to alleviate semantic fragility.", "Table 2: Rouge results on the two datasets.
Models | arXiv (R-1 / R-2 / R-L) | PubMed (R-1 / R-2 / R-L)
Lead | 33.66 / 8.94 / 22.19 | 35.63 / 12.28 / 25.17
LexRank+ | 33.85 / 10.73 / 28.99 | 39.19 / 13.89 / 34.59
LSA+ | 29.91 / 7.42 / 25.67 | 33.89 / 9.93 / 29.70
Oracle* | 53.88 / 23.05 / 34.90 | 55.05 / 27.48 / 38.66
Seq2seq-attention+ | 29.30 / 6.00 / 25.56 | 31.55 / 8.52 / 27.38
PGN+ | 32.06 / 9.04 / 25.16 | 35.86 / 10.22 / 29.69
Discourse-aware+ | 35.80 / 11.05 / 31.80 | 38.93 / 15.37 / 35.21
Cheng & Lapata (2016)* | 42.24 / 15.97 / 27.88 | 43.89 / 18.53 / 30.17
SummaRuNNer* | 42.81 / 16.52 / 28.23 | 43.89 / 18.78 / 30.36
Seq2seq-local&global* | 43.62 / 17.36 / 29.14 | 44.85 / 19.70 / 31.43
Match-Sum | 40.59 / 12.98 / 32.64 | 41.21 / 14.91 / 36.75
Topic-GraphSum | 44.03 / 18.52 / 32.41 | 45.95 / 20.81 / 33.97
SSN-DM | 45.03 / 19.03 / 32.58 | 46.73 / 21.00 / 34.10
SSN-DM + discourse | 44.90 / 19.06 / 32.77 | 46.52 / 20.94 / 35.20", "For the memory module, we set the number of slots to 50 and the dimension of the memory vector to 768, the same as the hidden size of the encoder.", "The iteration number of GAT is set to 2. We use Rouge (Lin, 2004) as the evaluation metric and select the hyperparameters by grid search based on the Rouge-2 performance on the validation sets.", "Further analysis of the impact of the hyperparameters is discussed in Section 5.2.", "We train our model on 2 NVIDIA V100 cards with a small batch size of 16.", "During training, we use Adam (Kingma and Ba, 2015) to optimize the parameters with a learning rate of 5e-4.", "An early-stop strategy (Caruana et al., 2000) is applied when the validation loss no longer decreases.", "The number of extracted sentences is set to 7 for the arXiv dataset and 6 for the PubMed dataset, according to their average summary lengths.", "We report the average results over 5 runs.", "Table 2 presents the results of different models on the two datasets.", "The first section includes traditional approaches and the Oracle; the second and third sections include abstractive and extractive models, respectively; and the last section reports ours.", "Figure 5: Proportion of sentences selected by each window.", "Our model '+discourse' denotes that we leverage section information as an additional feature (Eq. 2).",
2).", "Several observations deserve to be mentioned.", "Encoding long texts for abstractive summarization is a challenge .", "The vanilla seq2seq with attention model and the pointer network perform rather poorly on the two datasets.", "A possible reason is that most encoders experience difficulties in modeling long-range contextual dependency when encoding long texts (Vaswani et al., 2017; Frermann and Klementiev, 2019), thereby leading to the inferior performance during the generation (decoding) process.", "Global Information Modeling is important for summarizing long documents .", "We also observe that Seq2seq-local&global and Topic-Number of Slots R1 35 40 45 50 arXiv PubMed 20 40 60 80 100 150 200 35 40 45 50 128 192 256 320 384 448 512 arXiv PubMed Window Length R1 Figure 6: Impact of window length (left) and slot number (right) on model performance (R-1).", "GraphSum show promising results on the two datasets.", "Both of them explicitly model the global information (e.g., latent topics).", "Such observation provides a useful instruction for designing the summarization model for long documents.", "Our framework is effective .", "Our two models substantially outperform all the baselines on two datasets.", "Figure 5 shows the proportion of sentences selected by each window, where we can see that our model can extract contents from any position of an entire document.", "By contrast, BERT-Sum and Topic-GraphSum, two BERT-based strong baselines, can only select sentences from the first 512 or 768 words because their truncation setting.", "This superiority endows our model a higher upper bound when summarizing long documents.", "Discourse structure is automatically captured .", "The last section of Table 3 shows that the incorporation of discourse information brings no substantial performance gain for our model, though observations in previous studies (Cohan et al., 2018; Xiao and Carenini, 2019) have shown it an effective feature on arXiv and PubMed datasets.", "A possible reason is that our window-level position encoding has already learned such discourse information because it indicates the window's relative position in the document, while scientific papers are generally organized in specific and relatively fixed structure.", "This observation implies that the performance of our model does not rely on prior information of datasets.", "As a result, our model could be easily generalized to long texts of other genres.", "We conduct experiments to probe into the impact of several important hyperparameters on model performance, including window length, number of memory slots, and number of memory hops (i.e., iteration number of GAT).", "Impact of Window Length .", "Intuitively, a shorter window means more accurate text encoding.", "However, it will result in more segments, which is demanding for memory module.", "Therefore, it is important to find a balanced window length.", "Figure 6 (left) shows that the overall performance is enhanced when the window length increases from a small value (128).", "This is because that too short windows suffer from semantic fragility.", "However, when the window length is set to 368-512, the performance shows a stable trend, implying that the step number and text length are both in a suitable range.", "For the sake of efficiency, we set the window length to 512 in our experiments.", "Impact of Slots Numbers .", "Figure 6 (right) presents the Rouge-1 results on varying slot numbers.", "As can be seen, the curves on the two datasets are not monotonous and show 
a similar trend.", "In particular, within a particular range where l is relatively small, more slots produce better performance because the memory capacity is improving.", "However, such increasing trend will reach a saturation when slot number exceeds a threshold, which is 60 in our experiments.", "Impact of Iteration Numbers .", "Recall that in memory layer, we employ a GAT to calculate the interaction between the memory and the window texts.", "To select the best iteration number (hop number) t , we compare the performance of different t on the validation sets of two datasets.", "Table 3 shows when t goes from 0 to 2, the performance is slightly boosted.", "However, this increasing trend is not always monotonous, and a larger t does not bring further substantial gain.", "To balance the time cost and performance, we select t =2 for the two datasets.", "of memory module.", "To this end, we construct an ablated version by removing the memory module and then seek to observe the result difference.", "Case Study .", "Figure 7 provides a case study that compares the selection results of the ablated model and our full model.", "In 4-th window, the ablated model selects a repetitive sentence, whereas our full model avoids such error.", "This positive effect is brought by the extraction results preserved in memory module, which serve as a reminder of what information has already been selected.", "We also note that the ablated model selects wrong sentences in 5-th window.", "This is because that the model mistakes the self-esteem as the salient information.", "By contrast, our model, being aware of previous texts, correctly captures the social isolation as the core topic and filters the noisy sentences.", "Quantitative analysis .", "In Figure 8, we compare the Rouge scores between our full model and the ablated one.", "As can be seen, the performance declines dramatically on both datasets when the memory module is removed.", "This proves that the dynamic memory indeed plays a necessary role in our model.", "We further analyze the effecf of memory module in better granularity.", "Intuitively, the memory module should enhance our model in the following aspects: (1) Reducing Redundancy .", "Our mem-45.03 19.03 32.58 39.95 16.46 28.19 0 10 20 30 40 50 R-1 R-2 R-L full model w/o memory ArXiv PubMed 46.73 21 34.1 41.46 18.38 30.05 0 10 20 30 40 50 R-1 R-2 R-L full model w/o memory Figure 8: Rouge results of our full model and the ablated version on the two datasets.", "ory module explicitly records the previous predictions and functions like a sentence-level coverage mechanism, which is expected to reduce repetition.", "(2) Avoiding Noise .", "As discussed in Section 1, segment-wise extraction tend to mistake locally important content as summary sentences due to the lack of global context.", "Our memory module allows the cross-window information flow and therefore should alleviate this problem.", "(3) Perceiving Sentence Length .", "The awareness of previous selections may also allow the model to capture sentence length information (Zhong et al., 2019).", "Ideally, our model is able to adaptively change the change the length of extracted sentence, thereby achieving better performance.", "formance on above aspects.", "Similar to (Zhong et al., 2019), we use S Rep = 1 CountUniq ( ngram ) Count ( ngram ) to measure the degree of repetition, where Count ( ngram ) and CountUniq ( ngram ) are the total and unique number of ngrams of selected sentences.", "For the noise measurement, we have S noise = Count ( 
where NoisySent are the sentences with an R-1 score smaller than a threshold.", "For the length deviation, we have $S_{\mathrm{Len}} = \frac{\big|\, |sum| - |ref| \,\big|}{|ref|}$, where $|sum|$ and $|ref|$ denote the lengths of the model-produced summary and the reference summary, respectively.", "Table 4 presents the comparison results.", "The model achieves better performance on all three indicators when combined with the memory mechanism, consistent with the aforementioned analysis.", "In this study, we propose a novel extractive summarization model that can summarize long-form documents without content loss.", "We conduct extensive experiments on two well-studied datasets that consist of scientific papers.", "Experimental results demonstrate that our model outperforms previous state-of-the-art models.", "In the future, we will extend our framework (i.e., a sliding encoder combined with long-range memory modeling) to abstractive summarization models." ]
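The three diagnostic indicators defined in this record can be sketched directly; the n-gram order and the noise threshold are illustrative assumptions.

```python
def ngrams(tokens, n=2):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def s_rep(summary_tokens, n=2):
    """Repetition: share of non-unique n-grams among the selected sentences."""
    grams = ngrams(summary_tokens, n)
    return 1 - len(set(grams)) / max(len(grams), 1)

def s_noise(sentence_rouge1, threshold=0.1):
    """Noise: fraction of extracted sentences whose R-1 is below a threshold."""
    return sum(r < threshold for r in sentence_rouge1) / max(len(sentence_rouge1), 1)

def s_len(summary_len, ref_len):
    """Length deviation between the produced and reference summaries."""
    return abs(summary_len - ref_len) / ref_len
```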
[ "abstain", "abstain", "objective", "method", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "method", "method", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "result", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "objective", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "abstain", "method", "method", "abstain", "method", "method", "abstain", "other", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective" ]
[ "Pretrained multilingual models (PMMs) enable zero-shot learning via cross-lingual transfer, performing best for languages seen during pretraining.", "While methods exist to improve performance for unseen languages, they have almost exclusively been evaluated using amounts of raw text only available for a small fraction of the world's languages.", "In this paper, we evaluate the performance of existing methods to adapt PMMs to new languages using a resource available for over 1600 languages: the New Testament.", "This is challenging for two reasons: (1) the small corpus size, and (2) the narrow domain.", "While performance drops for all approaches, we surprisingly still see gains of up to 17 .", "69% accuracy for part-of-speech tagging and 6 .", "29 F1 for NER on average over all languages as compared to XLM-R.", "Another unexpected finding is that continued pretraining, the simplest approach, performs best.", "Finally, we perform a case study to disentangle the effects of domain and size and to shed light on the influence of the finetuning source language.", "Pretrained multilingual models (PMMs) are a straightforward way to enable zero-shot learning via cross-lingual transfer, thus eliminating the need for labeled data for the target task and language.", "However, downstream performance is highest for languages that are well represented in the pretraining data or linguistically similar to a well represented language.", "Performance degrades as representation decreases, with languages not seen during pretraining generally having the worst performance.", "In the most extreme case, when a language's script is completely unknown to the model, zero-shot performance is effectively random.", "While multiple methods have been shown to improve the performance of transfer to underrep-Figure 1: The number of space-separated words in the Bible and Wikipedia for six low-resource languages used in our experiments; plotted on a log scale.", "resented languages (cf. 
Section 2.3), previous work has evaluated them using unlabeled data from sources available for a relatively small number of languages, such as Wikipedia or Common Crawl, which cover 316 and 160 languages, respectively (see https://en.wikipedia.org/wiki/List_of_Wikipedias and https://commoncrawl.github.io/cc-crawl-statistics/plots/languages).", "Due to this low coverage, the languages that would most benefit from these methods are precisely those which do not have the necessary amounts of monolingual data to implement them as-is.", "To enable the use of PMMs for truly low-resource languages, where they can, e.g., assist language documentation or revitalization, it is important to understand how state-of-the-art adaptation methods act in a setting more broadly applicable to many languages.", "In this paper, we ask the following question: Can we use the Bible, a resource available for roughly 1600 languages, to improve a PMM's zero-shot performance on an unseen target language?", "And, if so, what adaptation method works best?", "We investigate the performance of XLM-R (Conneau et al., 2020) when combined with continued pretraining (Chau et al., 2020), vocabulary extension (Wang et al., 2020), and adapters (Pfeiffer et al., 2020b), making the following assumptions: (1) the only text available in a target language is the New Testament, and (2) no annotated training data exists in the target language.", "We present results on two downstream tasks, part-of-speech (POS) tagging and named entity recognition (NER), on a typologically diverse set of 30 languages, all of which are unseen during the pretraining of XLM-R.", "We find that, surprisingly, even though we use a small corpus from a narrow domain, most adaptation approaches improve over XLM-R's base performance, showing that the Bible is a valuable source of data for our purposes.", "We further observe that in our setting the simplest adaptation method, continued pretraining, performs best for both tasks, achieving gains of up to 17.69% accuracy for POS tagging and 6.29 F1 for NER on average across languages.", "Additionally, we seek to disentangle the effects of two aspects of our experiments on downstream performance: the selection of the source language, and the restricted domain of the New Testament.", "Towards this, we conduct a case study focusing on three languages with Cyrillic script: Bashkir, Chechen, and Chuvash.", "In order to understand the effect of the choice of source language, we use a more similar language, Russian, as our source of labeled data.", "To explore the effect of the New Testament's domain, we conduct our pretraining experiments with an equivalent amount of data sampled from the Wikipedia of each language.", "We find that changing the source language to Russian increases average baseline performance by 18.96 F1, and we achieve the highest results across all settings when using both Wikipedia and Russian data.", "Prior to the introduction of PMMs, cross-lingual transfer was often based on word embeddings (Mikolov et al., 2013).", "Joulin et al. (2018) present monolingual embeddings for 294 languages using Wikipedia, succeeded by Grave et al.
(2018) who present embeddings for 157 languages trained on additional data from Common Crawl.", "For cross-lingual transfer, monolingual embeddings can then be aligned using existing parallel resources, or in a completely unsupervised way (Bojanowski et al., 2017; Artetxe et al., 2017; Lample et al., 2017; Conneau et al., 2017; Artetxe et al., 2016).", "Table 1: Languages used in our experiments, none of which are represented in XLM-R's pretraining data.
Code | Language | Script | Language Family | Task
ace | Acehnese | Latin | Austronesian | NER
arz | Egyptian Arabic | Arabic | Afro-Asiatic | NER
bak | Bashkir | Cyrillic | Turkic | NER
bam | Bambara | Latin, N'ko | Mande | POS
ceb | Cebuano | Latin | Austronesian | NER
che | Chechen | Cyrillic | Northeast Caucasian | NER
chv | Chuvash | Cyrillic | Turkic | NER
cop | Coptic | Coptic | Ancient Egyptian | POS
crh | Crimean Turkish | Cyrillic | Turkic | NER
glv | Manx | Latin | Indo-European | POS
grc | Ancient Greek | Greek | Indo-European | POS
gsw | Swiss German | Latin | Indo-European | POS
hak | Hakka Chinese | Chinese | Sino-Tibetan | NER
ibo | Igbo | Latin | Niger-Congo | NER
ilo | Iloko | Latin | Austronesian | NER
kin | Kinyarwanda | Latin | Niger-Congo | NER
mag | Magahi | Devanagari | Indo-Iranian | POS
mhr | Eastern Mari | Cyrillic | Uralic | NER
min | Minangkabau | Latin | Austronesian | NER
mlt | Maltese | Latin | Afro-Asiatic | Both
mri | Maori | Latin | Austronesian | NER
myv | Erzya | Cyrillic | Uralic | POS
nds | Low German | Latin | Indo-European | NER
ory | Odia | Odia | Indo-Iranian | NER
sco | Scots | Latin | Indo-European | NER
tat | Tatar | Cyrillic | Turkic | NER
tgk | Tajik | Cyrillic | Indo-Iranian | NER
war | Waray | Latin | Austronesian | NER
wol | Wolof | Latin | Niger-Congo | Both
yor | Yoruba | Latin | Niger-Congo | Both", "Although they use transformer-based models, Artetxe et al. (2020) also transfer in a monolingual setting.", "Another method for cross-lingual transfer involves multilingual embeddings, where languages are jointly learned as opposed to being aligned (Ammar et al., 2016; Artetxe and Schwenk, 2019).", "For a more in-depth look at cross-lingual word embeddings, we refer the reader to Ruder et al. (2019).", "While the above works deal with generally improving cross-lingual representations, task-specific cross-lingual systems often show strong performance in a zero-shot setting.", "For POS tagging, in a similar setting to our work, Eskander et al. (2020) achieve strong zero-shot results by using unsupervised projection (Yarowsky et al., 2001) with aligned Bibles.", "Recent work on cross-lingual NER includes Mayhew et al. (2017), who use dictionary translations to create target-language training data, as well as Xie et al. (2018), who use a bilingual dictionary in addition to self-attention.", "Bharadwaj et al. (2016) use phoneme conversion to aid cross-lingual NER in a zero-shot setting.", "More recently, Bari et al. (2020) propose a model only using monolingual data for each language, and Qi et al.
(2020) propose a language-agnostic toolkit supporting NER for 66 languages.", "In contrast to these works, we focus on the improvements offered by adaptation methods for pretrained models for general tasks.", "PMMs can be seen as the natural extension of multilingual embeddings to pretrained transformer-based models.", "mBERT was the first PMM, covering the 104 languages with the largest Wikipedias.", "It uses a 110k byte-pair encoding (BPE) vocabulary (Sennrich et al., 2016) and is pretrained on both a next sentence prediction and a masked language modeling (MLM) objective.", "Languages with smaller Wikipedias are upsampled and highly represented languages are downsampled.", "XLM is a PMM trained on 15 languages.", "XLM similarly trains on Wikipedia data, using a BPE vocabulary with 95k subwords, and up- and downsamples languages similarly to mBERT.", "XLM also introduces translation language modeling (TLM), a supervised pretraining objective, where tokens are masked as for MLM, but parallel sentences are concatenated such that the model can rely on subwords in both languages for prediction.", "Finally, XLM-R is an improved version of XLM.", "Notable differences include the larger vocabulary of 250k subwords created using SentencePiece tokenization (Kudo and Richardson, 2018) and the training data, which is taken from CommonCrawl and is considerably larger than that of mBERT and XLM.", "XLM-R relies solely on MLM for pretraining and achieves state-of-the-art results on multiple benchmarks (Conneau et al., 2020).", "We therefore focus solely on XLM-R in our experiments.", "Downstream Performance of PMMs: While Pires et al. (2019) and Wu and Dredze (2019) show the strong zero-shot performance of mBERT, Wu and Dredze (2020) shed light on the difference in performance between well and poorly represented languages after finetuning on target-task data.", "Muller et al. (2020) observe varying zero-shot performance of mBERT on different languages not present in its pretraining data.", "They group them into 'easy' languages, on which mBERT performs well without any modification, 'medium' languages, on which mBERT performs well after additional pretraining on monolingual data, and 'hard' languages, on which mBERT performs poorly even after modification.", "They additionally note the importance of script, finding that transliterating into Latin offers improvements for some languages.", "As transliteration involves language-specific tools, we consider it out of scope for this work, and leave further investigation of how to best utilize transliteration to future work.", "Lauscher et al. (2020) focus on PMM finetuning, and find that for unseen languages, gathering labeled data for few-shot learning may be more effective than gathering large amounts of unlabeled data.", "Additionally, Chau et al. (2020), Wang et al. (2020), and Pfeiffer et al.
(2020b) present the adaptation methods whose performance we investigate here in a setting where only the Bible is available.", "We give a general overview of these methods in the remainder of this section, before describing their application in our experiments in Section 3.", "Continued Pretraining: In a monolingual setting, continued pretraining of a language representation model on an MLM objective has been shown to help downstream performance on tasks involving text from a domain distant from the pretraining corpora (Gururangan et al., 2020).", "In a multilingual setting, it has been found that, given a target language, continued pretraining on monolingual data from that language can lead to improvements on downstream tasks (Chau et al., 2020; Muller et al., 2020).", "Vocabulary Extension: Many pretrained models make use of a subword vocabulary, which strongly reduces the issue of out-of-vocabulary tokens.", "However, when the pretraining and target-task domains differ, important domain-specific words may be over-fragmented, which reduces performance.", "In the monolingual setting, Zhang et al. (2020) show that extending the vocabulary with in-domain tokens yields performance gains.", "A similar result to that of continued pretraining holds in the multilingual setting: downstream performance on an underrepresented language benefits from additional tokens in the vocabulary, allowing for a better representation of that language.", "Wang et al. (2020) find that extending the vocabulary of mBERT with new tokens and training on a monolingual corpus yields improvements for a target language, regardless of whether the language was seen or unseen.", "Chau et al. (2020) report similar results, and introduce tiered vocabulary augmentation, where new embeddings are learned with a higher learning rate.", "While both approaches start with a random initialization, they differ in the number of new tokens added: Wang et al. (2020) limit new subwords to 30,000, while Chau et al. (2020) set a limit of 99, selecting the subwords which reduce the number of unknown tokens while keeping the subword-to-token ratio similar to the original vocabulary.", "Adapters: Adapters are layers with a small number of parameters, injected into models to help transfer learning (Rebuffi et al., 2017).", "Houlsby et al. (2019) demonstrate the effectiveness of task-specific adapters in comparison to standard finetuning.", "Pfeiffer et al. (2020b) present invertible adapters and MAD-X, a framework utilizing them along with language and task adapters for cross-lingual transfer.", "After freezing the model weights, invertible and language adapters for each language, including English, are trained together using MLM.", "The English-specific adapters are then used along with a task adapter to learn from labeled English data.", "For zero-shot transfer, the invertible and language adapters are replaced with those trained on the target language, and the model is subsequently evaluated.",
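To make the adapter mechanism concrete, here is a generic Houlsby-style bottleneck adapter in PyTorch. This is a sketch of the general idea, not the MAD-X implementation, and the dimensions are assumptions. In MAD-X, one such language adapter per language is trained with MLM while the transformer stays frozen, a task adapter is stacked on top, and at test time the source-language adapter is swapped for the target-language one.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small residual bottleneck inserted into a frozen transformer layer;
    only these few parameters are trained for a new language or task."""
    def __init__(self, hidden=768, bottleneck=48):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        self.act = nn.GELU()

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual connection
```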
"Unlabeled Data: We use the Johns Hopkins University Bible Corpus (JHUBC) from McCarthy et al. (2020), which covers 1611 languages, providing verse-aligned translations of both the Old and New Testament.", "However, the New Testament is much more widely translated: 86% of the translations do not include the Old Testament.", "We therefore limit our experiments to the New Testament, which amounts to about 8000 verses in total, although specific languages may not have translations of all verses.", "For the 30 languages we consider, this averages to around 402k subword tokens per language.", "The specific versions of the Bible we use are listed in Table 5.", "Labeled Data: For NER, we use the splits of Rahimi et al. (2019), which are created from the WikiAnn dataset (Pan et al., 2017).", "For POS tagging, we use data taken from the Universal Dependencies Project (Nivre et al., 2020).", "As XLM-R utilizes a subword vocabulary, we perform sequence labeling by assigning labels to the last subword token of each word.", "For all target languages, we only finetune on labeled data in English.", "Languages: We consider all languages for which a test set exists for either downstream task and for which we have a Bible.", "We then filter these languages by removing those present in the pretraining data of XLM-R.", "See Table 1 for a summary of the languages, their attributes, and the downstream task we use them for.", "Our goal is to analyze state-of-the-art PMM adaptation approaches in a true low-resource setting, where the only text available in a target language is the New Testament and no annotated training data exists at all.", "We now describe our implementation of these methods.", "We focus on the Base version of XLM-R (Conneau et al., 2020) as our baseline PMM.", "Continued Pretraining: We consider three models based on continued pretraining.", "In the simplest case, +MLM, we continue training XLM-R with an MLM objective on the available verses of the New Testament.", "Additionally, as Bible translations are a parallel corpus, we also consider a model, +TLM, trained using translation language modeling.", "Finally, following the findings of Lample and Conneau (2019), we also consider a model using both TLM and MLM, +{M|T}LM.", "For this model, we alternate between batches consisting solely of verses from the target Bible and batches consisting of aligned verses of the target-language and source-language Bibles.", "For NER, we pretrain +MLM and +TLM models for 40 epochs, and pretrain +{M|T}LM models for 20 epochs.", "For POS tagging, we follow a similar pattern, training +MLM and +TLM for 80 epochs, and +{M|T}LM for 40 epochs.",
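A sketch of the +MLM continued-pretraining step with the Huggingface Transformers library used in these experiments; the Bible file name is an assumption, and for +TLM one would tokenize aligned source and target verses as a pair instead of a single verse.

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

verses = [l.strip() for l in open("target_bible.txt", encoding="utf-8") if l.strip()]
dataset = [{"input_ids": ids} for ids in
           tokenizer(verses, truncation=True, max_length=256)["input_ids"]]
# For +TLM, concatenate aligned verse pairs instead, e.g.
# tokenizer(src_verse, tgt_verse, truncation=True, max_length=256).

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="xlmr-mlm-bible", num_train_epochs=40,
                           per_device_train_batch_size=32, learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()
```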
(2020).", "We denote this as +Extend .", "For each target language, we train a new SentencePiece (Kudo and Richardson, 2018) tokenizer on the Bible of that language with a maximum vocabulary size of 30,000.", "3 To prevent adding duplicates, we filter out any subword already present in the vocabulary of XLM-R.", "We then add additional pieces representing these new subwords into the tokenizer of XLM-R, and increase XLM-R's embedding matrix accordingly using a random initialization.", "Finally, we train the embeddings using MLM on the Bible.", "For NER, we train +Extend models for 40 epochs, and for POS tagging, we train for 80 epochs.", "Adapters For adapters, we largely follow the full MAD-X framework (Pfeiffer et al., 2020b), using language, invertible, and task adapters.", "This is denoted as +Adapters .", "To train task adapters, we download language and invertible adapters for the source language from AdapterHub (Pfeiffer et al., 2020a).", "We train a single task adapter for each task, and use it across all languages.", "We train language and invertible adapters for each target language by training on the target Bible with an MLM objective.", "As before, for NER we train for 40 epochs, and for POS we train for 80 epochs.", "For finetuning, we train using 1 Nvidia V100 32GB GPU, and use an additional GPU for adaptation methods.", "Experiments for NER and POS take around 1 and 2 hours respectively, totalling to 165 total training hours, and 21.38 kgCO 2 eq emitted (Lacoste et al., 2019).", "All experiments are run using the Huggingface Transformers library (Wolf et al., 2020).", "We limit sequence lengths to 256 tokens.", "We select initial hyperparameters for finetuning by using the English POS development set.", "We then fix all hyperparameters other than the number of epochs, which we tune using the 3 languages which have development sets, Ancient Greek, Maltese, and Wolof.", "We do not use early stopping.", "For our final results, we finetune for 5 epochs with a batch size of 32, and a learning rate of 2e-5.", "We use the same hyperparameters for both tasks.", "For each task and adaptation approach, we search over { 10, 20, 40, 80 } epochs, and select the epoch which gives the highest average performance across the development languages.", "We use the same languages as above for POS.", "For NER we use 4 languages with varying baseline performances: Bashkir, Kinyarwanda, Maltese, and Scots.", "We pretrain with a learning rate of 2e-5 and a batch size of 32, except for +Adapters , for which we use a learning rate of 1e-4 (Pfeiffer et al., 2020b).", "We present results for NER and POS tagging in Tables 2 and 3, respectively.", "We additionally provide plots of the methods' performances as compared to the XLM-R baseline in Figures 2 and 3, showing performance trends for each model.", "NER We find that methods based on our most straightforward approach, continued pretraining Lang.", "( +MLM , +TLM , + { M|T } LM ), perform best, with 3.93 to 6.29 F1 improvement over XLM-R.", "Both +Extend and +Adapters obtain a lower average F1 than the XLM-R baseline, which shows that they are not a good choice in our setting: either the size or the domain of the Bible causes them to perform poorly.", "Focusing on the script of the target language (cf. 
"Focusing on the script of the target language (cf. Table 1), the average performance gain across all models is higher for Cyrillic languages than for Latin languages.", "Therefore, in relation to the source language script, the performance gain is higher for target languages with a script more distant from the source.", "When considering the approaches which introduce new parameters, +Extend and +Adapters, performance only increases for Cyrillic languages and decreases for all others.", "However, when considering the continued pretraining approaches, we find a performance increase for all scripts.", "Looking at Figure 2, we see that the lower the baseline F1, the larger the improvement of the adaptation methods on downstream performance, with all methods increasing performance for the language for which the baseline is weakest.", "As baseline performance increases, the benefit provided by these methods diminishes, and all methods underperform the baseline for Scots, the language with the highest baseline performance.", "Figure 2: NER results (F1).", "We hypothesize that at
3 settings, depending on the data used.", "In the first setting, we change the language of our labeled training data from English to Russian.", "While Russian is not necessarily similar to the target languages or mutually intelligible, we consider it to be more similar than English; Russian is written in the same script as the target languages, and there is a greater likelihood for lexical overlap and the existence of loanwords.", "In the second setting, we pretrain using Wikipedia and in the third setting we use both Wikipedia data as well as labeled Russian data.", "To create our Wikipedia training data, we extract sentences with WikiExtractor (Attardi, 2015) and split them with Moses SentenceSplitter (Koehn et al., 2007).", "To create a comparable training set for each language, we first calculate the total number of subword tokens found in the New Testament, and sample sentences from Wikipedia until we have an equivalent amount.", "In the setting where we use data from the New Testament and labeled Russian data, for +TLM and + { M|T } LM we additionally substitute the English Bible with the Russian Bible.", "When using Wikipedia, we omit results for +TLM and + { M|T } LM , as they rely on a parallel corpus.", "We present the results of our case study in Table 4.", "In the sections below, we refer to case study settings as they are described in the table caption.", "Effects of the Finetuning Language We find that using Russian as the source language (the Rus-sian baseline; B-R w/ XLM-R) increases performance over the English baseline (B-E w/ XLM-R) by 18.96 F1.", "Interestingly, all of the adaptation methods utilizing the Bible do poorly in this set-4562 ting (B-R), with +MLM only improving over the Russian baseline by 1.09 F1, and all other methods decreasing performance.", "We hypothesize that when adaptation data is limited in domain, as the source language approaches the target language in similarity, the language adaptation is mainly done in the finetuning step, and any performance gain from the unlabeled data is minimized.", "This is supported by the previous NER results, where we find that, when using English as the source language, the adaptation methods lead to higher average performance gain over the baseline for Cyrillic languages, i.e., the more distant languages, as opposed to Latin languages.", "The adaptation methods show a larger improvement when switching to Wikipedia data (W-R), with +MLM improving performance by 11.85 F1 over the Russian baseline.", "Finally, the performance of +Extend when using Russian labeled data is similar on average regardless of the adaptation data (B-R, W-R), but noticeably improves over the setting which uses Wikipedia and English labeled data.", "Effects of the Domain Used for Adaptation Fixing English as the source language and changing the pretraining domain from the Bible to Wikipedia (W-E) yields strong improvements, with +Adapters improving over the English baseline by 20.9 F1 and +MLM improving by 14.68 F1.", "However, we note that, while the average of +Adapters is higher than that of +MLM , this is due to higher performance on only a single language.", "When compared to the best performing pretraining methods that use the Bible (B-E), these methods improve by 11.29 F1 and 5.30 F1 respectively.", "When using both Wikipedia and Russian data, we see the highest overall performance, and +MLM increases over the English baseline by 30.81 F1 and the Russian baseline by 11.85 F1.", "One limitation of this work and other works which involve a 
high number of languages is task selection.", "While part-of-speech tagging and named entity recognition are important, they are both low-level tasks largely based on sentence structure, with no requirement for higher levels of reasoning, unlike tasks such as question answering or natural language inference.", "While XTREME (Hu et al., 2020) is a great, diverse benchmark covering these higher-level tasks, the number of languages is still limited to only 40, all of which have Wikipedia data available. (We also note that the WikiANN labels are computer generated and may suffer from lower recall when compared to hand-annotated datasets.)", "Extending these benchmarks to truly low-resource languages by introducing datasets for these tasks will further motivate research on these languages, and provide a more comprehensive evaluation of their progress.", "Additionally, while the Bible is currently available in some form for 1611 languages, the available text for certain languages may differ in quantity and quality from the Bible text we use in our experiments.", "Therefore, although we make no language-specific assumptions, our findings may not fully generalize to all 1611 languages due to these factors.", "Furthermore, this work focuses on analyzing the effects of adaptation methods for only a single multilingual transformer model.", "Although we make no model-specific assumptions in our methods, the set of unseen languages differs from model to model.", "Moreover, although we show improvements for the two tasks, we do not claim to have state-of-the-art results.", "In a low-resource setting, the best performance is often achieved through task-specific models.", "Similarly, translation-based approaches, as well as few-shot learning, may offer additional benefits over a zero-shot setting.", "We also do not perform an extensive analysis of the target languages, or an analysis of the selected source language for finetuning.", "A better linguistic understanding of the languages in question would allow for a better selection of the source language, as well as the ability to leverage linguistic features, potentially leading to better results.", "Finally, by using a PMM, we inherit all of that model's biases.", "The biases captured by word embeddings are well known, and recent work has shown that contextual models are not free of biases either (Caliskan et al., 2017; Kurita et al., 2019).", "The use of the Bible, and religious texts in general, may further introduce additional biases.", "Last, we acknowledge the environmental impact of training models on the scale of XLM-R (Strubell et al., 2019).", "In this work, we evaluate the performance of continued pretraining, vocabulary extension, and adapters for unseen languages of XLM-R in a realistic low-resource setting.", "Using only the New Testament, we show that continued pretraining is the best-performing adaptation approach, leading to gains of 6.29 F1 on NER and 17.69% accuracy on POS tagging.", "We therefore conclude that the Bible can be a valuable resource for adapting PMMs to unseen languages, especially when no other data exists.", "Furthermore, we conduct a case study on three languages written in Cyrillic script.", "Changing the source language to one more similar to the target language reduces the effect of adaptation, but the performance of the adaptation methods relative to each other is preserved.", "Changing the domain of the adaptation data to one more similar to the target task while keeping its 
size constant improves performance.", "We would like to thank the ACL reviewers for their constructive and insightful feedback as well as Yoshinari Fujinuma, Stephane Aroca-Ouellette, and other members of the CU Boulder's NALA Group for their advice and help." ]
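The comparable-corpus construction described above (sampling Wikipedia sentences until their subword-token count matches the New Testament's) can be made concrete with a short sketch. This is our illustration under stated assumptions, not the authors' released code; the tokenizer choice and function name are ours.

```python
# Sketch (assumed, not the authors' code): sample Wikipedia sentences
# until their subword-token count matches that of the New Testament.
import random
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")  # XLM-R subwords

def sample_comparable_corpus(wiki_sentences, bible_sentences, seed=0):
    budget = sum(len(tok.tokenize(s)) for s in bible_sentences)
    rng = random.Random(seed)
    rng.shuffle(wiki_sentences)
    sampled, used = [], 0
    for sent in wiki_sentences:
        if used >= budget:
            break  # stop once we match the Bible's token budget
        sampled.append(sent)
        used += len(tok.tokenize(sent))
    return sampled
```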
[ "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "other", "result", "objective", "result", "abstain", "method", "method", "objective", "result", "other", "other", "method", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "method", "other", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "abstain", "method", "abstain", "abstain", "other" ]
[ "Dialogue policy optimization often obtains feedback until task completion in task-oriented dialogue systems.", "This is insufficient for training intermediate dialogue turns since supervision signals (or rewards ) are only provided at the end of dialogues.", "To address this issue, reward learning has been introduced to learn from state-action pairs of an optimal policy to provide turn-by-turn rewards.", "This approach requires complete state-action annotations of human-to-human dialogues (i.e., expert demonstrations), which is labor intensive.", "To overcome this limitation, we propose a novel reward learning approach for semi-supervised policy learning.", "The proposed approach learns a dynamics model as the reward function which models dialogue progress (i.e., state-action sequences) based on expert demonstrations, either with or without annotations.", "The dynamics model computes rewards by predicting whether the dialogue progress is consistent with expert demonstrations.", "We further propose to learn action embeddings for a better generalization of the reward function.", "The proposed approach outperforms competitive policy learning baselines on MultiWOZ, a benchmark multi-domain dataset.", "Task-oriented dialogue systems complete tasks for users, such as making a restaurant reservation or finding attractions to visit, in multi-turn dialogues (Gao et al., 2018; Sun et al., 2016, 2017).", "Dialogue policy is a critical component in both the conventional pipeline approach (Young et al., 2013) and recent end-to-end approaches (Zhao et al., 2019).", "It decides the next action that a dialogue system should take at each turn.", "Considering its nature of sequential decision making, dialogue policy is usually learned via reinforcement learning (Su et al., Rui Zhang is the corresponding author. Table 1: State Action Annotation and Utterance Example UserSide Utterance I would like moderate price range please. Dialogue State annotation Restaurant: { food=modern european, price range=moderate } SystemSide Utterance I found de luca cucina and riverside brasserie. does either of them sound good for you? 
"Specifically, dialogue policy is learned by maximizing accumulated rewards over interactions with an environment (i.e., actual users or a user simulator).", "Handcrafted rewards are commonly used for policy learning in earlier work (Peng et al., 2018), a scheme that assigns a small negative penalty at each turn and a large positive/negative reward when the task is successful/failed.", "However, such a reward setting does not provide sufficient supervision signals at each turn other than the last turn, which causes sparse-reward issues and may result in poorly learned policies (Takanobu et al., 2019).", "To address this problem, reward function learning that relies on expert demonstrations has been introduced (Takanobu et al., 2019; Li et al., 2019b).", "Specifically, state-action sequences generated by an optimal policy (i.e., expert demonstrations) are collected, and a reward function is learned to give high rewards to state-action pairs that better resemble the behaviors of the optimal policy.", "In this way, turn-by-turn rewards estimated by the reward function can be provided to learn dialogue policy.", "Obtaining expert demonstrations is critical to reward function learning.", "Since it is impractical to assume that an optimal policy is always available, a common and reasonable approach is to treat the decision making in human-human dialogues as optimal behavior.", "To accommodate the learning of the reward function, human-human dialogues need to be annotated in the form of state-action pairs from textual utterances.", "Table 1 illustrates an example of a human-human dialogue and its state-action annotation.", "However, obtaining such annotations requires extensive effort and cost.", "Besides, a reward function based on state-action pairs might cause unstable policy learning, especially with a limited amount of annotated dialogues (Yang et al., 2018).", "To address the above issues, we propose to learn dialogue policies in a semi-supervised setting where the system actions of expert demonstrations need only be partially annotated.", "We propose to use an implicitly trained stochastic dynamics model as the reward function to replace the conventional reward function that is restricted to state-action pairs.", "Dynamics models describe sequential progress using a combination of stochastic and deterministic states in a latent space, which promotes effective tracking and forecasting (Minderer et al., 2019; Sun et al., 2019; Wang et al., 2019a).", "In our scenario, we train the dynamics model to describe the dialogue progress of expert demonstrations.", "The main rationale is that the reward function should give high rewards to actions that lead to dialogue progress similar to that in expert demonstrations.", "This is because dialogue progress at the early stage highly influences subsequent progress, and the latter directly determines whether the task can be completed.", "Since the learning of the dynamics model maps observations to latent states and further reasons over the latent states, we are no longer restricted to fully annotated dialogues.", "Using the dynamics model as the reward function also promotes more stable policy learning.", "Learning the dynamics model in the text space is, however, prone to compounding errors due to the complexities and diversities of language.", "We tackle this challenge by learning the dynamics model in an action embedding space 
that encodes the effect of system utterances on dialogue progress.", "We achieve action embedding learning by incorporating an embedding function into a generative model framework for semi-supervised learning (Kingma et al., 2014).", "We observe that system utterances with comparable effects on dialogue progress will lead to similar state transitions (Huang et al., 2019a).", "Therefore, we formulate the generative model to describe the state transition process.", "Using the generative model, we enrich the expert dialogues (either fully or partially annotated) with action embeddings to learn the dynamics model.", "Moreover, we also consider the scenarios where both state and action annotations are absent in most expert dialogues, referred to as unlabeled dialogues.", "To expand the proposed approach to such scenarios, we further propose to model dialogue progress using action sequences and reformulate the generative model accordingly.", "Our contributions are summarized as follows: To the best of our knowledge, we are the first to approach semi-supervised dialogue policy learning.", "We propose a novel reward estimation approach to dialogue policy learning which relieves the requirement of extensive annotations and promotes stable learning of dialogue policy.", "We propose an action embedding learning technique to effectively train the reward estimator from either partially labeled or unlabeled dialogues.", "We conduct extensive experiments on the benchmark multi-domain dataset.", "Results show that our approach consistently outperforms strong baselines coupled with semi-supervised learning techniques.", "For task-oriented dialogues, a dialogue policy π(a | s) decides an action a ∈ A based on the dialogue state s ∈ S at each turn, where A and S are the predefined sets of all actions and states, respectively.", "Reinforcement learning is commonly applied to dialogue policy learning, where the dialogue policy model is trained to maximize accumulated rewards through interactions with environments (i.e., users): L = E_{τ_i∼π}[r(τ_i)] = E_{τ_i∼π}[Σ_t r(s_t, a_t)] (1), where τ_i = {(s_t, a_t) | 0 ≤ t ≤ n} represents a sampled dialogue, and r(τ_i) is the numerical reward obtained in this dialogue.", "Instead of determining r(τ_i) via heuristics, recent reward learning approaches train a reward function r to assign numerical rewards to each state-action pair.", "The reward function is learned from expert demonstrations D_demo, which are dialogues sampled from an optimal policy in the form of state-action pairs.", "Adversarial learning is usually adopted to enforce higher rewards for state-action pairs from expert demonstrations and lower rewards for those sampled from the learning policy (Fu et al., 2017): L = E_{τ_j∼D_demo}[r(τ_j)] + log E_{τ_i∼q}[exp(r(τ_i)) / q(τ_i)] (2), where π is the current dialogue policy, and q is the distribution of dialogues generated with π. [Figure 1: Overall framework of the proposed approach.]", "In this way, the dialogue policy and reward function are iteratively optimized, which requires great training effort and might lead to unstable learning results (Yang et al., 2018).", "Moreover, such a reward learning approach requires complete dialogue state and system action annotations of expert demonstrations, which are expensive to obtain.", "We study the problem of semi-supervised dialogue policy learning.", "Specifically, we consider the setting where expert demonstrations D_demo consist of a small number of fully labeled dialogues D_F and partially labeled dialogues D_P.", "For 
each fully annotated dialogue τ_i in D_F, complete annotations are available: τ_i = {(s_t, a_t, u_t) | 1 ≤ t ≤ n}, where u_t is the system utterance at turn t.", "Meanwhile, each partially labeled dialogue τ_j in D_P only has state annotations and system utterances: τ_j = {(s_t, u_t) | 1 ≤ t ≤ n}.", "Figure 1 illustrates the overall framework of the proposed approach.", "Rewards are estimated by a dynamics model that consumes action embeddings e(a_t).", "Every action in the set A is mapped to a fixed-length embedding via a learnable embedding function f_E.", "To obtain the action embeddings for D_P, which has no action annotations, we first predict the action via a prediction model f_A and then transform the predicted actions to embeddings.", "To obtain effective action embeddings, we design a state-transition-based objective to jointly optimize f_E and f_A via variational inference (Sec. 3.2).", "After obtaining the action embeddings, the dynamics model is learned by fitting the expert demonstrations enriched by action embeddings.", "Rewards are then estimated as the conditional probability of the action given the current dialogue progress encoded in latent states (Sec. 3.3).", "We also extend the above approach to unlabeled dialogues where both state and action annotations are absent (Sec. 3.4).", "We aim to learn the prediction model f_A and action embeddings using both D_F and D_P.", "We formulate the action prediction model as f_A(a | u_t, s_t, s_{t+1}), which takes as input the system utterance u_t and its corresponding state transition (s_t, s_{t+1}).", "We then introduce a mapping function f_E : A → E, where E ⊆ R^d is the action embedding space later used for learning the dynamics model.", "We train the prediction model by proposing a variational inference approach based on a semi-supervised variational autoencoder (Semi-VAE) (Kingma et al., 2014).", "Semi-VAE describes the data generation process of feature-label pairs {(x_i, y_i) | 1 ≤ i ≤ N} via latent variables z as: log p_θ(x) = log Σ_y ∫_z p_θ(x, z, y) dz (3), where p_θ is a generative model parameterized by θ, and the class label y is treated as a latent variable for unlabeled data.", "Since this log-likelihood in Eqn.", "3 is intractable, its variational lower bound for unlabeled data is instead optimized as: log p_θ(x) ≥ E_{q_φ(y,z|x)}[log (p_θ(x, z, y) / q_φ(y, z | x))] = E_{q_φ(y|x)}[L(x, y)] + H(q_φ(y | x)) = U(x) (4), where q_φ(z | x, y) and q_φ(y | x) are inference models for the latent variables z and y respectively, which have the factorized form q_φ(y, z | x) = q_φ(z | x, y) q_φ(y | x); H(·) denotes causal entropy; L(x, y) is the variational bound for labeled data, and is formulated as: L(x, y) = E_{q_φ(z|x,y)}[log p_θ(x | z, y)] + log p(y) − KL(q_φ(z | x, y) || p(z)) (5), where KL is the Kullback-Leibler divergence, and p(y), p(z) are the prior distributions of y and z.", "The generative model p_θ and the inference models q_φ(z | x, y) and q_φ(y | x) are optimized using both the labeled subset p_l and the unlabeled subset p_u with the objective: L = Σ_{(x,y)∈p_l} L(x, y) + Σ_{x∈p_u} U(x) (6). Semi-Supervised Action Prediction: We now describe the learning of the action prediction model f_A using semi-supervised expert demonstrations.", "We extend the semi-supervised VAE by modeling the generation process of state transitions.", "State transition information is indicative for action prediction and is available in both fully and partially labeled demonstrations.", "Thus we choose to 
describe the generation process of state transitions, and the optimization objective is formulated as: log p_θ(s_{t+1}, s_t) = log Σ_a ∫ p_θ(s_{t+1}, z, s_t, a) dz = log Σ_a ∫ p_θ(s_{t+1}, s_t | z, a) p(z) p(a) dz (7). For partially labeled dialogues, we treat action labels as latent variables and use the action prediction model f_A(a | u_t, s_t, s_{t+1}) to infer the value (which is denoted as f_A(a | ·) later for simplicity).", "The variational bound of Eqn.", "7 is derived as: U(s_{t+1}, s_t) = E_{f_A(a|·)}[L(s_{t+1}, s_t, a)] + H(f_A(a | ·)) (8), where L(s_{t+1}, s_t, a) is the variational bound for demonstrations with action labels and is derived as: L(s_{t+1}, s_t, a) = E_{q_φ(z|u_t,a)}[log p_θ(s_{t+1} | s_t, z)] − KL(q_φ(z | u_t, a) || p(z)) (9), where q_φ(z | u_t, a) is the inference model for the latent variable z.", "Lastly, we use fully annotated samples to form a classification loss: L_cls = E_{τ_i∼D_F}[log f_A(a | u_t, s_t, s_{t+1})] (10). The overall objective includes the losses of fully and partially labeled demonstrations: L_act = Σ_{τ_i∈D_F} L(s_{t+1}, s_t, a) + Σ_{τ_j∈D_P} U(s_{t+1}, s_t) + L_cls (11). Action Embeddings Learning: We then incorporate the action embedding function f_E into the developed semi-supervised action prediction approach.", "The reason to introduce action embeddings is to make the learning of the reward estimator more efficient and robust.", "Specifically, prediction errors of the action prediction model might impede the learning of the reward estimator, especially in our semi-supervised scenarios where fully labeled dialogues are limited.", "By mapping actions to an embedding space, 'wrongly predicted' partially labeled demonstrations can still provide sufficient knowledge, and thus we could achieve better generalization over actions for reward estimation.", "To this aim, we consider the inference steps in the semi-supervised learning process and utilize the ones that involve action labels, i.e., the inference models for the latent variables z and a.", "We first specify how the action prediction model is modified to include action embeddings.", "Inspired by Chandak et al. (2019), we model the action selection using a Boltzmann distribution for stability during training: f_A(a | u_t, s_t, s_{t+1}) = e^{z_a/T} / Σ_{a′∈A} e^{z_{a′}/T}, z_a = e(a)⊤ g(u_t, s_t, s_{t+1}), e(a) = f_E(a) (12), where T is a temperature parameter, and g(·) is a function that maps the input into hidden states of the same dimension as the action embeddings.", "We also modify the inference model for the latent variable by incorporating action embeddings: q_φ(z | u_t, a) = q_φ(z | u_t, e(a)) (13). After optimizing the action prediction model f_A and the action embedding function f_E jointly using the objective function Eqn.", "11, we use action embeddings to enrich the expert demonstrations.", "For fully labeled dialogues, we map the given system action labels to the corresponding embeddings and obtain τ_i = {(s_t, e(a_t)) | 1 ≤ t ≤ n}.", "For partially labeled dialogues, we first infer the action using the prediction model, a_t = f_A(u_t, s_t, s_{t+1}), and map the inferred action to its embedding to obtain τ_j = {(s_t, e(a_t)) | 1 ≤ t ≤ n}.", "We aim to learn a reward estimator based on the action representations obtained from the action learning module.", "To achieve more stable reward estimation than adversarial reward learning, we propose a reward estimator based on dialogue progress.", 
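To make Eqn. 12 concrete, here is a minimal sketch of the Boltzmann action-selection head over learnable action embeddings; reducing g(·) to a single linear layer and the module name are our illustrative assumptions, not the authors' implementation.

```python
# Sketch of Eqn. 12: f_A(a | u_t, s_t, s_{t+1}) proportional to exp(e(a)^T g(.) / T).
import torch
import torch.nn as nn

class ActionPredictor(nn.Module):
    def __init__(self, num_actions, emb_dim, ctx_dim, temperature=1.0):
        super().__init__()
        self.f_E = nn.Embedding(num_actions, emb_dim)  # action embeddings e(a)
        self.g = nn.Linear(ctx_dim, emb_dim)           # stand-in for g(u_t, s_t, s_{t+1})
        self.T = temperature

    def forward(self, ctx):                            # ctx: encoded (u_t, s_t, s_{t+1})
        z = self.g(ctx) @ self.f_E.weight.t()          # z_a = e(a)^T g(.)
        return torch.softmax(z / self.T, dim=-1)       # Boltzmann distribution over actions
```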
"Dialogue progress describes how user goals are achieved through multistep interactions and can be modeled as dialogue state transitions.", "We argue that an action should be given higher rewards when it leads to similar dialogue progress (i.e., state transitions) of expert demonstrations.", "To this aim, we learn a model to explicitly model dialogue progress without the negative sampling required by adversarial learning, and rewards can be estimated as the local-probabilities assigned to the taken actions.", "To model dialogue progress, we use variational recurrent neural network (VRNN) (Chung et al., 2015).", "The reason to use a stochastic dynamics model is due to the one-to-many' nature of task-oriented dialogues.", "Specifically, both user and dialogue system have multiple feasible options to proceed the dialogues which requires the modeling of uncertainty.", "Thus, by adding latent random variables to an RNN architecture, VRNN can provide better modeling of dialogue progress than deterministic dialogue state tracking.", "VRNN has three types of variables: the observations (and here we consider action embeddings), the stochastic state z , and the deterministic hidden state h , which summarizes previous stochastic states z t , and previous observations a t .", "We formulate the prior stochastic states to be conditioned on previous timesteps through hidden state h t 1 : p ( z t | a <t , z <t ) = prior ( h t 1 ) (14) We obtain posterior stochastic states by incorporating the observation at the current step, i,e. action embeddings e ( a t ) : q ( z t | a t , z <t ) = enc ( h t 1 , e ( a t )) (15) Predictions are made by decoding latent states, including both the stochastic and deterministic: p ( e ( a t ) | z t , a <t ) = dec ( z t , h t 1 , s t ) (16) And lastly the deterministic states are updated as: h t = rnn ( e ( a t ) , z t , h t 1 , s t ) (17) where are all implemented as neural networks.", "Note that we also make the prediction and recurrence step to condition on the dialogue state s t to provide more information.", "We train the VRNN by optimizing the evidence lower bound (ELBO) as: LVRNN = E q ( z t | a t ,z <t ) (cid:2) (cid:88) t log p ( e ( a t ) | z t , a <t ) KL ( q ( z t | a t , z <t ) || p ( z t | a <t , z <t )) (cid:3) (18) The rewards are estimated as the conditional probability given the hidden state of VRNN, which encodes the current dialogue progress: r ( s t , a t ) = log p dec ( a t | a <t , s t ) (19) where p dec is the probability given to the selected action based on the decoding step of VRNN (Eqn. 16).", "The larger this conditional probability is, the more similar the dialogue progress this action leads to imitates the expert demonstrations.", "The proposed reward estimation is agnostic to the choice of policy, and various approaches (e.g., Deep Q-learning, Actor-Critic) can be optimized by plugging into the policy learning objective (Eqn. 
1).", "We further describe how to expand the proposed model, including action learning and reward estimation modules, to utilize unlabeled expert demonstrations .", "Formally, we consider the setting that we have fully labeled dialogues DF and unlabeled dialogues DU .", "For each dialogue in DU , only textual conversations are provided and neither of state and action labels are available: j = { ( c t , u t ) | 1 t n } , where c t is the context and consists of the dialogue history of both user and system utterances.", "With the absence of dialogue state information, we formulate the action prediction model as f A ( a | u t , u t 1 , u t +1 ) .", "This formulation can be considered as an application of Skip-Thought (Kiros et al., 2015), which originally utilizes contextual sentences as supervision signals.", "In our scenarios, we instead utilize the previous and next system utterances to provide more indicative information for action prediction.", "We also build the joint learning of action prediction model the action embeddings on semi-supervised VAE framework.", "Instead of modeling state transitions, we choose the process of response generation to fully utilize unlabeled dialogues: log p ( u t ) = log (cid:88) a (cid:90) p ( u t , z, a ) dz = log (cid:88) a (cid:90) p ( u t | z, a t ) p ( z ) p ( a ) dz (20) System action labels are treated as latent variables for unlabeled dialogues, and the variational bond is derived as: U ( u t ) = E f A ( a | ) [ L ( u t , a )] H ( f A ( a | )) (21) where L ( u t , a ) is variational bound for fully labeled dialogues: L ( u t , a ) = E q ( z | u t ,a ) [ p ( u t | z, u t 1 , u t +1 )] KL ( q ( z | a, u t ) || p ( z )) (22) The objective to jointly train the prediction model and action embeddings is the same as Eqn.", "11, where the terms for fully and partially labeled dialogues are replaced with the ones in Eqn.", "22 and 21, respectively.", "Such expanding also enables a sufficient semi-supervised learning when expert demonstrations include all types of labeled dialogues: DF , DP and DU .", "We notice that the posterior approximation q ( z | u t , a ) and action embedding function f E can be sharing between the process of state transitions and response generation.", "Thus, by treating semi-supervised learning in DF and DP as auxiliary constraints, the learning over unlabeled corpus can also benefit from dialogues state information.", "To show the effectiveness of the proposed model (denoted as Act-VRNN ), we experiment on a multi-domain dialogue environment under semi-supervised setting (Sec. 4.1).", "We compare against state-of-the-art approaches, and their variants enhanced by semi-supervised learning techniques (Sec. 4.2).", "We analyze the effectiveness of action learning and reward estimation of Act-VRNN under different supervision ratios (Sec. 4.3).", "We use MultiWOZ (Budzianowski et al., 2018), a multi-domain human-human conversational dataset in our experiments.", "It contains in total 8438 dialogues spanning over seven domains, and each dialogue has 13.7 turns on average.", "MultiWOZ also contains a larger dialogue state and action space compared to former datasets such as movie-ticket booking dialogues (Li et al., 2017), and thus it is a much more challenging environment for policy learning.", "To use MultiWOZ for policy learning, a user simulator that initializes a user goal at the beginning and interacts with dialogue policy is required.", "For a fair comparison, we adopt the same procedure as Takanobu et al. 
(2019) to train the user simulator based on auxiliary user action annotations provided by ConvLab (Lee et al., 2019).", "To simulate semi-supervised policy learning, we remove system action and dialogue state annotations to obtain partially labeled and unlabeled expert demonstrations, respectively.", "Fully labeled expert demonstrations are randomly sampled from all training dialogues with different ratios (5%, 10%, and 20% in our experiments).", "Note that the absence of action or state annotations applies only to expert demonstrations, while interactions between the policy and the user simulator are at the dialogue-act level, as in Takanobu et al. (2019), and are not affected by the semi-supervised setting.", "We use a three-layer transformer (Vaswani et al., 2017) with a hidden size of 128 and 4 heads as our base model for action embedding learning, i.e., g(·) in Eqn.", "12.", "We use grid search to find the best hyperparameters for the models.", "We choose the action embedding dimensionality among { 50, 75, 100, 150, 200 }, the stochastic latent state size in VRNN among { 16, 32, 64, 128, 256 }, and the deterministic latent state size among { 25, 50, 75, 100, 150 }.", "We use Entity-F1 and Success Rate to evaluate dialogue task completion.", "Entity-F1 computes the F1 score based on whether the requested information and indicated constraints from users are satisfied.", "Compared to the inform rate and match rate used by Budzianowski et al. (2018), Entity-F1 considers both informed and requested entities at the same time and balances recall and precision.", "Success rate indicates the ratio of successful dialogues, where a dialogue is regarded as successful only if all informed and requested entities are matched by the end of the dialogue.", "We use Turns to evaluate the cost of task completion, where a lower number indicates the policy performs tasks more efficiently.", "We compare Act-VRNN with three policy learning baselines: (1) PPO (Schulman et al., 2017) using a handcrafted reward setting; (2) ALDM (Liu and Lane, 2018); (3) GDPL (Takanobu et al., 2019). We further consider using semi-supervised techniques to enhance the baselines under the semi-supervised setting, and denote them as SS-PPO, SS-ALDM, and SS-GDPL.", "Specifically, we first train a prediction model based on the semi-supervised VAE (Kingma et al., 2014), and use the prediction results as action annotations for expert demonstrations. [Table 2: Semi-Supervised Policy Learning Results (D_F and D_P), reporting Entity-F1 / Success / Turns for the settings D_F (5%) + D_P (95%), D_F (10%) + D_P (90%), and D_F (20%) + D_P (80%). Handcrafted: PPO 41.8/34.1/13.3, 45.3/36.7/12.5, 50.6/41.2/11.2. Reward learning: ALDM 38.7/35.6/15.2, 42.1/38.6/14.9, 44.9/42.1/13.7; GDPL 49.5/47.5/12.8, 54.9/53.2/12.1, 60.4/59.1/10.8. Semi-VAE enhanced: SS-PPO 45.2/36.2/13.6, 47.4/37.2/12.4, 53.1/43.6/11.5; SS-ALDM 39.6/38.8/14.7, 44.7/43.8/13.2, 47.8/51.3/12.4; SS-GDPL 53.7/51.2/11.1, 61.3/58.4/10.5, 66.5/68.7/9.2. Proposed: SS-VRNN 68.7/63.2/9.4, 75.1/68.5/8.6, 77.3/72.4/8.2; Act-GDPL 70.6/65.6/9.5, 78.8/71.1/8.4, 80.9/78.0/8.2; Act-VRNN 76.2/72.7/9.1, 83.0/81.8/8.0, 85.5/86.7/7.9.]", "We also compare the full model Act-VRNN with its two variants: (1) SS-VRNN uses a VRNN that consumes predicted action labels instead of action embeddings; (2) Act-GDPL feeds expert demonstrations enriched by action embeddings to the same reward function as GDPL. [4.2 Overall Results] Table 2 shows that our proposed model consistently outperforms other models in the setting that uses fully and partially annotated dialogues 
( DF and DP ).", "Act-VRNN improves task completion (measured by Entity-F1 and Success) while requiring less cost (measured by Turns).", "For example, Act-VRNN (81.8) outperforms SS-GDPL (60.4) by 35.4% under Success when having 10% fully annotated dialogues, and requires the fewest turns.", "Meanwhile, we find that both action learning and dynamics model are essential to the superiority of Act-VRNN.", "For example, Act-VRNN achieves 19.8% and 11.2% improvements over SS-VRNN and Act-GDPL, respectively, under Success when having 20% fully annotated dialogues.", "This validates that the learned action embeddings well capture similarities among actions, and VRNN is able to exploit such similarities for reward estimation.", "We further find that the improvements brought by semi-VAE enhancement is limited for baselines, especially when the ratio of fully annotated dialogues is low.", "For example, SS-PPO and SS-GDPL achieve 6% and 7% improvements over their counterparts under Success when having 5% fully annotated dialogues.", "Similar results are also observed for pseudo-label approach.", "In general, the pseudo-1 We also experimented with the pseudo-label approach (Lee, 2013), and the empirical results were worse than Semi-VAE.", "Thus, we only report the Semi-VAE enhancement results in the table for simplicity.", "label methods are outperformed by the counterparts of Semi-VAE and are even worse than the baselines without enhancement when the ratio of fully annotated dialogues is low.", "For example, in setting DF + DP , pseudo-label enhanced PPO performs worse than PPO under Entity-F1 when the ratio of fully annotated dialogues is 5% and 10% (37.2 vs 41.8, 39.2 vs 45.3), and only achieves slightly gain when the ratio is 20% (51.0 vs 50.6).", "This is largely because the prediction accuracy of Semi-VAE and pseudo-label approach might be low with a small amount of fully annotated dialogues, and the expert dialogues with mispredicted actions impinge reward function learning of baselines.", "Act-VRNN overcomes this challenge with the generalization ability brought by modeling dialogue progress in an action embedding space for reward estimation.", "The results for policy learning using unlabeled dialogues ( DU ) are shown on Table 3.", "We consider two settings: (1) having fully labeled and unlabeled dialogues, i.e., DF + DU ; (2) having all three types of dialogues , i.e., DF + DP + DU .", "We can see that Act-VRNN significantly outperforms the baselines in both settings.", "For example, in setting DF + DU , Act-VRNN outperforms SS-GDPL by 43% and 44% under Entity-F1 and Success, respectively.", "Similar results are also observed in setting DF + DP + DU .", "We further find that SS-VRNN outperforms Act-GDPL in these two settings while the results are opposite in setting DF + DP , and we will conduct a detailed discussion in the following section.", "By comparing results of Act-VRNN and baselines in these two settings, we can see that Act-VRNN can better exploit the additional partially labeled dialogues.", "For example, SS-GDPL only achieves 2.3% under Success while Act-VRNN achieves more than 5%.", "We compare Act-VRNN with SS-VRNN, and their counterparts that do not use state transition based objective in semi-supervised learning (i.e., optimizing Eqn. 3 instead of Eqn. 
7).", "These two variants are denoted as Act-VRNN (no state) and SS-VRNN (no state).", "For a thorough investigation, under each setting, we further show the performances under dialogues spanning over different number of domains.", "Dialogues spanning over more domains are considered more difficult.", "The results under two supervision ratio setting are shown in Fig.", "2(a) and Fig.", "2(b).", "We can see that Act-VRNN outperforms other variants in each con-figuration, especially in the dialogues that include more than one domains.", "This is largely because the learned action embeddings effectively discover the similarities between actions across domains, and thus lead to better generalization of reward estimation.", "We further find that the state transition based objective we formulated fits well with the VRNN based reward estimator.", "Both Act-VRNN and SS-VRNN optimized considering state transitions achieve performance gains.", "Last, we study the effects of dynamics model based reward function in Act-VRNN.", "We consider four different models as reward function: (1) our full dynamics model VRNN; (2) a dynamics model having only deterministic states (Eqn. 17); (3) a dynamics model having only stochastic states (Eqn. 15); (4) GDPL.", "All four models are learned based 1 2 3 Numberofdomainsinthedialogue 40 50 60 70 80 S u cce ss r a t e SS-VRNN(nostate) Act-VRNN(nostate) SS-VRNN Act-VRNN", "on action embedding learned in the action learning module.", "The results under DF + DP and DF + DU are shown in Fig.", "3(a) and Fig.", "3(b), respectively.", "We can see that both stochastic and deterministic states in VRNN are important, since VRNN outperforms its two variants and GDPL in each configuration.", "We further find that the contribution of stochastic and deterministic states may vary in different setting.", "For example, VRNN (stochastic only) consistently outperforms VRNN (determin-istic only) in DF + DU while opposite results are observed in DF + DP when ratio of DF is over 20%.", "This is largely because modeling dialogue progress using stochastic states can provide more stable with less supervision signals, while the incorporation of deterministic can lead to more precise estimation can when more information of expert demonstrations are available.", "Reward learning aims to provide more effective and sufficient supervision signals for dialogue policy.", "Early studies focus on learning reward function utilizing external evaluations, e.g., user experience feedbacks (Gasic et al., 2013), objective ratings (Su et al., 2015; Ultes et al., 2017), or a combination of multiple evaluations (Su et al., 2016; Chen et al., 2019).", "These approaches often assume a human-in-the-loop setting where interactions with real users are available during training, which is expensive and difficult to scale.", "As more large-scale high-quality dialogue corpus become available (e.g., MultiWOZ (Budzianowski et al., 2018)), recent years have seen a growing interest in learning reward function from expert demonstrations.", "Most recent approaches apply inverse reinforcement learning techniques for dialogue policy learning (Takanobu et al., 2019; Li et al., 2019b).", "These all require a complete state-action annotation for expert demonstrations.", "We aim to overcome this limitation in this study.", "Semi-supervised learning aims to utilize unlabeled data to boost model performance, and is studied in computer vision (Iscen et al., 2019), item ranking (Park and Chang, 2019; Huang et al., 2019b), and 
multi-label classification (Miyato et al., 2015; Wang et al., 2018, 2019b).", "Many studies apply the semi-supervised VAE (Kingma et al., 2014) to different classification tasks, e.g., sentiment analysis (Xu et al., 2017; Li et al., 2019a) and text matching (Shen et al., 2018; Choi et al., 2019).", "While these works focus on prediction accuracy, we aim to enrich expert demonstrations via semi-supervised learning.", "We study the problem of semi-supervised policy learning and propose Act-VRNN to provide more effective and stable reward estimation.", "We formulate a generative model to jointly infer action labels and learn action embeddings.", "We design a novel reward function to first model dialogue progress, and then estimate action rewards by determining whether an action leads to progress similar to that of expert dialogues.", "The experimental results confirm that Act-VRNN achieves better task completion compared with the state-of-the-art in two settings that consider partially labeled or unlabeled dialogues.", "For future work, we will explore scenarios where annotations are absent for all expert dialogues.", "We would like to thank Xiaojie Wang for his help.", "This work is supported by Australian Research Council (ARC) Discovery Project DP180102050, and China Scholarship Council (CSC)." ]
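The VRNN-based reward estimator of Eqns. 14-19 above can be sketched compactly. The Gaussian latent and the squared-error reading of the decoder log-probability (Eqn. 19) are simplifying assumptions of ours, not the authors' exact parameterization.

```python
# Sketch (assumed) of the VRNN reward estimator: prior, encoder, decoder
# and recurrence over action embeddings, with the reward taken from the
# decoding step.
import torch
import torch.nn as nn

class VRNNReward(nn.Module):
    def __init__(self, act_dim, state_dim, z_dim, h_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)                   # Eqn. 14: p(z_t | h_{t-1})
        self.enc = nn.Linear(h_dim + act_dim, 2 * z_dim)           # Eqn. 15: q(z_t | h_{t-1}, e(a_t))
        self.dec = nn.Linear(z_dim + h_dim + state_dim, act_dim)   # Eqn. 16: predicts e(a_t)
        self.rnn = nn.GRUCell(act_dim + z_dim + state_dim, h_dim)  # Eqn. 17: updates h_t

    def step(self, e_a, s, h):
        mu, logvar = self.enc(torch.cat([h, e_a], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()       # sample posterior z_t
        e_hat = self.dec(torch.cat([z, h, s], -1))
        # Eqn. 19: reward = log-probability of the taken action under the
        # decoder; under a unit-variance Gaussian this reduces to a
        # negative squared error between predicted and taken embeddings.
        reward = -((e_hat - e_a) ** 2).sum(-1)
        h_next = self.rnn(torch.cat([e_a, z, s], -1), h)
        return reward, h_next
```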
[ "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "result", "method", "method", "method", "objective", "objective", "objective", "objective", "method", "result", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "objective", "objective", "method", "objective", "abstain", "objective", "other", "other" ]
[ "Abstract", "Prepositions are among the most frequent words in English and play complex roles in the syntax and semantics of sentences.", "Not surprisingly, they pose well-known difficulties in automatic processing of sentences (prepo-sitional attachment ambiguities and idiosyncratic uses in phrases).", "Existing methods on preposition representation treat prepositions no different from content words (e.g., word2vec and GloVe).", "In addition, recent studies aiming at solving prepositional attachment and preposition selection problems depend heavily on external linguistic resources and use dataset-specific word representations.", "In this paper we use word-triple counts (one of the triples being a preposition) to capture a preposition's interaction with its attachment and complement.", "We then derive preposition embeddings via tensor decomposition on a large unlabeled corpus.", "We reveal a new geometry involving Hadamard products and empirically demonstrate its utility in paraphrasing phrasal verbs.", "Furthermore, our preposition embeddings are used as simple features in two challenging downstream tasks: preposition selection and prepositional attachment disambiguation.", "We achieve results comparable to or better than the state-of-the-art on multiple standardized datasets.", "Prepositions are a linguistically closed class comprising some of the most frequent words; they play an important role in the English language since they encode rich syntactic and semantic information.", "Many preposition-related tasks are challenging in computational linguistics because of their polysemous nature and flexible usage patterns.", "An accurate understanding and representation of prepositions' linguistic role is key to several important NLP tasks such as grammatical error correction and prepositional phrase attachment.", "A first-order approach is to represent prepositions as real-valued vectors via word embeddings such as word2vec (Mikolov et al., 2013) and GloVe (Pen-nington et al., 2014).", "Word embeddings have brought a renaissance in NLP research; they have been very successful in capturing word similarities as well as analogies (both syntactic and semantic) and are now mainstream in nearly all downstream NLP tasks (such as question-answering (Chen et al., 2017)).", "Despite this success, available literature does not highlight any specific properties of word embeddings of prepositions.", "Indeed, many of the common prepositions have very similar vector representations as shown in Table 1 for preposition vectors trained using word2vec and GloVe (Ten-sor embedding is our proposed representation for prepositions).", "While this suggests that using available representations for prepositions diminishes the distinguishing aspect between prepositions, one could hypothesize that this is primarily because standard word embedding algorithms treat prepositions no different from other content words such as verbs and nouns, i.e., embeddings are created based on co-occurrences with other words.", "However, prepositions are very frequent and co-occur with nearly all words, which means that their co-occurrence ought to be treated differently.", "Modern descriptive linguistic theory proposes", "to understand a preposition via its interactions with both the head it attaches to (termed head ) and its complement (Huddleston, 1984; DeCar-rico, 2000).", "This theory naturally suggests that one should count co-occurrences of a given preposition with pairs of neighboring words.", "One way of achieving this would be 
by considering a tensor of triples (word1, word2, preposition), where we do not restrict word1 and word2 to be the head and complement words; instead we model a preposition's interaction with all pairs of neighboring words via a slice of a tensor X, where the slice is populated by word co-occurrences restricted to a context window of the specific preposition.", "Thus, the tensor dimension is N × N × K, where N is the vocabulary size and K is the number of prepositions; since K ≈ 50, we note that N ≫ K.", "Using such a representation, we notice that the resulting tensor is low rank and use it to extract embeddings for both preposition and non-preposition words.", "In doing so, we use a combination of standard ideas from word representations (such as weighted spectral decomposition as in GloVe (Pennington et al., 2014)) and tensor decompositions (alternating least squares (ALS) methods (Sharan and Valiant, 2017)).", "We find that the preposition embeddings extracted in this manner are discriminative (see the preposition similarity of the tensor embedding in Table 1).", "Note that the smaller the cosine similarity is, the more distinct the representations are from each other.", "We demonstrate that the resulting preposition representation captures the core linguistic properties of prepositions: the attachment and the complement properties.", "Using both intrinsic evaluations and downstream tasks, we show this by providing new state-of-the-art results on well-known NLP tasks involving prepositions.", "Intrinsic evaluations: We show that the Hadamard product of the embeddings of a verb and a preposition that together make a phrasal verb closely approximates the representation of this phrasal verb's paraphrase as a single verb.", "Example: v_made ⊙ v_from ≈ v_produced ⊙ v̄, where ⊙ represents the Hadamard product (i.e., element-wise multiplication) of two vectors and v̄ is a constant vector (not associated with a specific word and defined later); this approximation validates that prepositional semantics are appropriately encoded into their trained embeddings.", "We provide a mathematical interpretation for this new geometry while empirically demonstrating the paraphrasing of compositional phrasal verbs.", "Extrinsic evaluations: Our preposition embeddings are used as features for a simple classifier in two well-known challenging downstream NLP classification tasks.", "In both tasks, we perform as well as or strictly better than the state-of-the-art on multiple standardized datasets.", "Preposition selection: While the context in which a preposition occurs governs the choice of the preposition, the specific preposition by itself significantly influences the semantics of the context in which it occurs.", "Furthermore, the choice of the right preposition for a given context can be very subtle.", "This idiosyncratic behavior of prepositions is the reason behind preposition errors being one of the most frequent error types made by second-language English speakers (Leacock et al., 2010).", "We demonstrate the utility of the preposition embeddings in the preposition selection task, which is to choose the correct preposition for a given sentence.", "We show this for a large set of contexts: 7,000 combined instances from the CoNLL-2013 and the SE datasets (Prokofyev et al., 2014).", "Our approach achieves 6% and 2% absolute improvement over the previous state-of-the-art results on the respective datasets.", "Prepositional phrase attachment disambiguation: Prepositional 
phrase attachment is a common cause of structural ambiguity in natural language.", "In the sentence Pierre Vinken joined the board as a voting member, the prepositional phrase as a voting member can attach to either joined (the VP) or the board (the NP); in this case the VP attachment is correct.", "Despite being extensively studied over decades, prepositional attachment continues to be a major source of syntactic parsing errors (Brill and Resnik, 1994; Kummerfeld et al., 2012; de Kok and Hinrichs, 2016).", "We use our prepositional representations as simple features for a standard classifier on this task.", "Our approach, tested on a widely studied standard dataset (Belinkov et al., 2015), achieves 89% accuracy and compares favorably with the state-of-the-art.", "It is noteworthy that while the state-of-the-art results are obtained with significant linguistic resources, including syntactic parsers and WordNet, our approach achieves a comparable performance without relying on such resources.", "Word counts have previously been shown to capture much of the benefit of unlabeled sentence data; for example, Sharan and Valiant (2017) report that their word representations via word-triple counts are better than others, but still significantly worse than regular word2vec representations.", "One of our main observations is that considering word-triple counts makes most (linguistic) sense when one of the words is a preposition.", "Furthermore, the sparsity of the corresponding tensor is no worse than the sparsity of the regular word co-occurrence matrix (since prepositions are so frequent and co-occur with essentially every word).", "Taken together, these two points strongly suggest the benefits of tensor representations in the context of prepositions.", "(2) The word and preposition representations via tensor decomposition are simple features leading to a standard classifier.", "In particular, we do not use dependency parsing (which many prior methods have relied on) or handcrafted features (Prokofyev et al., 2014), nor do we train task-specific representations on the annotated training dataset (Belinkov et al., 2015).", "The simplicity of our approach, combined with the strong empirical results, lends credence to the strength of the prepositional representations found via tensor decompositions.", "We begin with a description of how the tensor with triples (word, word, preposition) is formed and empirically show that its slices are low-rank.", "Next, we derive low-dimensional vector representations for words and prepositions via appropriate tensor decomposition methods.", "Tensor creation: Suppose that K prepositions are in the preposition set P = { p_1, . . . , p_K }; here K is 49 in our preposition selection task, and 76 in the attachment disambiguation task.", "We limited the number of prepositions to what was needed in the dataset.", "The vocabulary, the set of all words excluding the prepositions, contains N words, V = { w_1, . . . 
, w_N }, and N ≈ 1M.", "We generate a third-order tensor X of dimension N × N × (K+1) from the WikiCorpus (Al-Rfou et al., 2013) as follows.", "We say two words co-occur if they appear within a distance t of each other in a sentence.", "For k ≤ K, the entry X_ijk is the number of occurrences where word w_i co-occurs with preposition p_k, and w_j also co-occurs with preposition p_k in the same sentence, and this is counted across all sentences in the WikiCorpus.", "[Figure 1: Decaying normalized singular values of slices.]", "For 1 ≤ k ≤ K, X[:, :, k] is a matrix of the counts of the word pairs that co-occur with the preposition p_k, and we call such a matrix a slice.", "Here we use a window of size t = 3.", "While prepositions co-occur with many words, there are also a number of other words which do not occur in the context of any preposition.", "In order to make maximal use of the data, we add an extra slice X[:, :, K+1], where the entry X_ij(K+1) is the number of occurrences where w_i co-occurs with w_j (within distance 2t = 6) but at least one of them is not within a distance of t of any preposition.", "Note that the preposition window of 3 is smaller than the word window of 6, since it is known that the interaction between prepositions and neighboring words usually weakens more sharply with distance when compared to that of content words (Hassani and Lee, 2017).", "Empirical properties of X: We find that the tensor X is very sparse: only 1% of the tensor elements are non-zero.", "Furthermore, log(1 + X[:, :, k]) is low-rank (here the logarithm is applied component-wise to every entry of the tensor slice).", "Towards seeing this, we choose slices corresponding to the prepositions about, before, for, in and of, and plot their normalized singular values in Figure", "1. We see that the singular values decay dramatically, suggesting the low-rank structure in each slice.", "Tensor decomposition: We combine standard ideas from word embedding algorithms and tensor decomposition algorithms to arrive at a low-rank approximation to the tensor log(1 + X).", "In particular, we consider two separate methods:", "1. 
Alternating Least Squares (ALS).", "A generic method to decompose a tensor into its modes is via the CANDECOMP/PARAFAC (CP) decomposition (Kolda and Bader, 2009).", "The tensor log(1 + X) is decomposed into three modes, U ∈ R^{d×N}, W ∈ R^{d×N} and Q ∈ R^{d×(K+1)}, based on the solutions to the optimization problem (1).", "Here u_i, w_i and q_i are the i-th columns of U, W and Q, respectively.", "L = min_{U,W,Q} Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{K+1} ( ⟨u_i, w_j, q_k⟩ − log(1 + X_ijk) )², (1) where ⟨a, b, c⟩ = 1⊤(a ⊙ b ⊙ c) is the inner product of three vectors a, b and c.", "Here 1 is the column vector of all ones and ⊙ refers to the Hadamard product.", "We can interpret the columns of U as the word representations and the columns of Q as the preposition representations, each of dimension d (equal to 200 in this paper).", "There are several algorithmic solutions to this optimization problem in the literature, most of which are based on alternating least squares methods (Kolda and Bader, 2009; Comon et al., 2009; Anandkumar et al., 2014), and we employ a recent one named Orth-ALS (Sharan and Valiant, 2017) in this paper.", "Orth-ALS periodically orthogonalizes the decomposed components while fixing two modes and updating the remaining one.", "It is supported by theoretical guarantees and empirically outperforms standard ALS methods in different applications.", "2. Weighted Decomposition (WD): Based on ideas from the literature on word embedding algorithms, we also consider weighting different elements of the tensor differently in order to reduce the effect of the large dynamic range of the tensor values.", "Specifically, we apply the GloVe objective function to our tensor model and minimize the objective function (2): L_weighted = min_{U,W,Q} Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k=1}^{K+1} ω_ijk ( ⟨u_i, w_j, q_k⟩ + b^U_i + b^W_j + b^Q_k − log(X_ijk + 1) )², (2) where b^U_i is the scalar bias for word i in the matrix U.", "Similarly, b^W_j is the bias for word j in the matrix W, and b^Q_k for preposition k in the matrix Q.", "Bias terms are learned in such a way as to minimize the loss function.", "Here ω_ijk is the weight assigned to each tensor element X_ijk, and we use the weighting proposed by GloVe: ω_ijk = min( (X_ijk / x_max)^α, 1 ).", "We set the hyperparameters to be x_max = 10 and α = 0.75 in this work.", "We solve this optimization problem via standard gradient descent, arriving at word representations U and preposition representations Q.", "Representation Interpretation: Suppose that we have a phrase (h, p_i, c) where h, p_i and c are the head word, the preposition (i ≤ K) and the complement, respectively.", "The inner product of the word vectors of h, p_i and c reflects how frequently h and c co-occur in the context of p_i.", "It also reflects how cohesive the triple is.", "Recall that there is an extra (K+1)-th slice that describes the word co-occurrences outside the preposition window, which considers cases such as a verb phrase (v, c) where v and c are the verb and its complement without a preposition in their shared context.", "Now consider a phrasal verb sparked off and a verb phrase with head prompted.", "For any complement word c that fits these two phrases (the phrasal verb having h as its head verb and p_i as its preposition, and the verb phrase having v as its head), we can expect that ⟨u_h, q_i, w_c⟩ ≈ ⟨u_v, q_{K+1}, w_c⟩.", "In other words, u_h ⊙ q_i ≈ u_v ⊙ q_{K+1}, where a ⊙ b denotes the 
"Representation interpretation: Suppose that we have a phrase (h, p_i, c), where h, p_i and c are the head word, the preposition p_i (i ≤ K) and the complement, respectively.", "The inner product of the word vectors of h, p_i and c reflects how frequently h and c co-occur in the context of p_i; it also reflects how cohesive the triple is.", "Recall that there is an extra (K+1)-th slice that describes the word co-occurrences outside the preposition window; it covers cases such as a verb phrase (v, c), where v and c are the verb and its complement without a preposition in their shared context.", "Now consider a phrasal verb sparked off and a verb phrase with head prompted.", "For any complement word c that fits both phrases (the phrasal verb having h as its head verb and p_i as its preposition, and the verb phrase having v as its head), we can expect that ⟨u_h, q_i, w_c⟩ ≈ ⟨u_v, q_{K+1}, w_c⟩; in other words, u_h ⊙ q_i ≈ u_v ⊙ q_{K+1}, where a ⊙ b denotes the pointwise multiplication (Hadamard product) of the vectors a and b.", "This suggests that (1) the vector q_{K+1} acts as a constant vector for all (v, c) pairs, and (2) we can paraphrase the verb phrase (h, p_i) by finding a verb v such that u_v ⊙ q_{K+1} is closest to u_h ⊙ q_i.", "This shows that well-trained embeddings are able to capture the relation between phrasal verbs and their equivalent single-verb forms.", "In Table 2 we list paraphrases of some verb phrases, generated from the weighted tensor decomposition.", "Table 2 (Paraphrasing of prepositional phrases): replied to → answered; blocked off → intercepted; put in → place; pray for → hope; dreamed of → wanted; sparked off → prompted; stuck with → stalled; derived from → generated; switched over → transferred; asked for → requested; passed down → delivered; blend in → mix.", "As can be seen, the tensor embedding gives reasonable paraphrases, which validates that the trained embedding is interpretable in terms of lexical semantics.",
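A hypothetical sketch of the paraphrase retrieval just described: find the verb v whose Hadamard product with q_{K+1} is closest (by cosine) to u_h ⊙ q_i for a phrasal verb (h, p_i). `U` and `Q` are the trained factor matrices; `verb_ids` and `vocab` are illustrative placeholders.

```python
import numpy as np

def paraphrase(head_id, prep_id, U, Q, verb_ids, vocab, k_extra=-1):
    target = U[:, head_id] * Q[:, prep_id]           # u_h ⊙ q_i
    cands = U[:, verb_ids] * Q[:, k_extra][:, None]  # u_v ⊙ q_{K+1} for all verbs
    sims = cands.T @ target / (
        np.linalg.norm(cands, axis=0) * np.linalg.norm(target) + 1e-9)
    return vocab[verb_ids[int(np.argmax(sims))]]
```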
"In the next two sections, we evaluate the proposed tensor-based preposition embeddings on two important downstream NLP tasks: preposition selection and preposition attachment disambiguation.", "In this work, we use the English WikiCorpus (around 9 GB) as the training corpus for the different sets of embeddings.", "We train tensor embeddings with both Orth-ALS and weighted decomposition.", "The implementation of Orth-ALS is built upon the SPLATT toolkit (Smith and Karypis, 2016).", "We perform orthogonalization in the first 5 iterations of the Orth-ALS decomposition, and training is stopped when its performance stabilizes.", "As for the weighted decomposition, we train for 20 iterations, with hyperparameters x_max = 10 and α = 0.75.", "We also include two baselines for comparison: word2vec's CBOW model and GloVe.", "We set 20 training iterations for both models.", "The hyperparameters in word2vec are: window size = 6, negative sampling = 25 and down-sampling = 1e-4.", "The hyperparameters in GloVe are: window size = 6, x_max = 10, α = 0.75 and minimum word count = 5.", "We note that all the representations in this study (word2vec, GloVe and our tensor embeddings) are of dimension 200.", "Grammatical error detection and correction are important tasks in NLP.", "Among grammatical errors, prepositional errors constitute about 13% of all errors, ranking second among the most common error types (Leacock et al., 2010).", "This is due to the fact that prepositions are highly polysemous and have idiosyncratic usage.", "Selecting a preposition depends on how well we can capture the interaction between a preposition and its context; hence we choose this task to evaluate how well the lexical interactions are captured by the different methods.", "Task: Given a sentence in English containing a preposition, we either replace the preposition with the correct one or retain it.", "For example, in the sentence It can save the effort to carrying a lot of cards, to should be corrected as of.", "Formally, there is a closed set of preposition candidates P = {p_1, ..., p_m}.", "A preposition p is used in a sentence s consisting of the words s = {..., w_{-2}, w_{-1}, p, w_1, w_2, ...}.", "If used incorrectly, we need to replace p by another preposition p' ∈ P based on the context.", "Dataset: For training, we use data from the Cambridge First Certificate in English (FCE) exam, as used by the state of the art on preposition error correction (Prokofyev et al., 2014).", "As test data, we use two datasets: CoNLL-2013 and Stack Exchange (SE).", "The CoNLL dataset on preposition error correction was published by the CoNLL-2013 shared task (Ng et al., 2014), collected from 50 essays written by 25 non-native English learners at a university.", "The SE dataset consists of texts generated by non-native speakers on the Stack Exchange website.", "Detailed statistics are shown in Table 3 (Dataset statistics): FCE has 27,119 sentences and 60,279 prepositions with an error ratio of 4.8; CoNLL has 1,375 sentences and 3,241 prepositions with an error ratio of 4.7; SE has 5,917 sentences and 15,814 prepositions with an error ratio of 38.2.", "We focus on the 49 most frequent prepositions, listed in Appendix A.", "Evaluation metric: Three metrics (precision, recall and F1 score) are used to evaluate preposition selection performance.", "Our algorithm: We first preprocess the dataset by removing articles, determiners and pronouns, and take a context window of 3.", "We divide the task into two steps: error detection and error correction.", "First, we decide whether a preposition is used correctly in its context; if not, we suggest another preposition as a replacement in the second step.", "The detection step uses only three features: the cosine similarity between the current preposition embedding and the average context embedding, the rank of the preposition in terms of this cosine similarity, and the probability that this preposition is left unchanged in the training corpus.", "We build a decision tree classifier with these three features and find that we can identify errors with 98% F1 score on the CoNLL dataset and 96% on the SE dataset.", "For the error correction part, we focus only on the errors detected in the first stage.", "Suppose that the original preposition is q and the candidate preposition is p, with embedding v_p.", "The word vectors in the left context window are averaged to give the left context embedding v_l, and the right vectors are averaged to give the right context embedding v_r.", "We then use the following features: 1. Embedding features: the left context, preposition and right context embeddings;", "2. Pair similarity between the preposition and the context, i.e., the maximum of the similarity of the preposition to the left and the right context: pair_sim = max( v_l^T v_p / (||v_l||_2 ||v_p||_2), v_r^T v_p / (||v_r||_2 ||v_p||_2) );", "3. Triple similarity: ⟨v_l, v_p, v_r⟩ / (||v_l||_3 ||v_p||_3 ||v_r||_3);",
"4. Confusion probability: the probability that q is replaced by p in the training data.", "(A code sketch of these similarity features is given after the results below.)", "A two-layer feed-forward neural network (FNN) with hidden layers of sizes 500 and 10 is trained on these features to score the candidate prepositions in each sentence.", "The preposition with the highest score is the suggested edit.", "Baseline: The state of the art on preposition selection uses n-gram statistics from a large corpus (Prokofyev et al., 2014).", "Features such as pointwise mutual information (PMI) and part-of-speech tags are fed into a supervised scoring system.", "Given a sentence with a preposition to either replace or retain, the preposition with the highest score is chosen.", "The performance of this baseline is affected by both the system architecture and the features.", "To evaluate the benefits brought by our tensor embedding-based features, we also consider other baselines that share the same two-step architecture but generate their features from word2vec and GloVe embeddings.", "These baselines allow us to compare representation power independently of the classifier.", "Result: We compare our proposed embedding-based method against the baselines in Table 4 (Performance on preposition selection; precision/recall/F1).", "On CoNLL: state of the art 0.2592/0.3611/0.3017; word2vec 0.1558/0.1579/0.1569; GloVe 0.1538/0.1578/0.1558; our method (ALS) 0.3355/0.3355/0.3355; our method (WD) 0.3590/0.3684/0.3636.", "On SE: state of the art 0.2704/0.2961/0.2824; word2vec 0.2450/0.2585/0.2516; GloVe 0.2454/0.2589/0.2520; our method (ALS) 0.2958/0.3146/0.3049; our method (WD) 0.2899/0.3055/0.2975.", "We note that the proposed tensor embeddings achieve the best performance among all approaches.", "In particular, the tensor with weighted decomposition has the highest F1 score on the CoNLL dataset, a 6% improvement over the state of the art, while the tensor with ALS decomposition performs best on the SE dataset, achieving a 2% improvement over the state of the art.", "We also note that, with the same architecture, tensor embeddings perform much better than word2vec and GloVe embeddings on both datasets, which validates the representation power of tensor embeddings of prepositions.",
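A minimal sketch, under simplified assumptions, of the correction-step similarity features listed above: pair similarity and triple similarity between a candidate preposition embedding v_p and the averaged left/right context embeddings v_l, v_r.

```python
import numpy as np

def selection_features(v_l, v_p, v_r):
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    pair_sim = max(cos(v_l, v_p), cos(v_r, v_p))
    # <a, b, c> = 1^T (a ⊙ b ⊙ c), normalized by 3-norms as in the text
    triple = (v_l * v_p * v_r).sum() / (
        np.linalg.norm(v_l, 3) * np.linalg.norm(v_p, 3)
        * np.linalg.norm(v_r, 3) + 1e-9)
    return np.concatenate([v_l, v_p, v_r, [pair_sim, triple]])
```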
"To gain deeper insight into the importance of the features in the preposition selection task, we also performed an ablation analysis of the tensor method with weighted decomposition, shown in Table 5 (Ablation analysis in preposition selection; precision/recall/F1 after removing each feature).", "On CoNLL: left context embedding 0.1558/0.1579/0.1569; preposition embedding 0.2662/0.2697/0.2680; right context embedding 0.3117/0.3158/0.3137; pair similarity 0.3247/0.3289/0.3268; triple similarity 0.3247/0.3289/0.3268; confusion score 0.3506/0.3553/0.3529.", "On SE: left context embedding 0.2587/0.2743/0.2663; preposition embedding 0.2796/0.2964/0.2877; right context embedding 0.2649/0.2801/0.2726; pair similarity 0.2658/0.2818/0.2735; triple similarity 0.2647/0.2807/0.2725; confusion score 0.1993/0.2114/0.2052.", "We find that the left context is the most important feature for the CoNLL dataset, whereas the confusion score is the most important for the SE dataset.", "Pair similarity and triple similarity are less important than the other features.", "This is because the neural network is able to learn lexical similarity from the embedding features directly, reducing the importance of the explicit similarity features.", "Discussion: We now analyze cases where our approach selects the wrong preposition.", "(1) Limited context window: We focus on the local context within a preposition's window.", "In some cases, we find that head words fall outside the context window.", "An instance of this is the sentence prevent more of this kind of tragedy to happening, where to should be corrected as from.", "Given the context window of 3, we cannot get the lexical clue provided by prevent, which leads to the selection error.", "(2) Preposition selection requires more context: Even when the context window contains all the words on which the preposition depends, it still may not be sufficient to select the right one.", "For example, in the sentence it is controlled by some men in a bad purpose, our approach replaces the preposition in with on, given the high frequency of the phrase on purpose; the correct preposition is for, based on the whole sentence.", "In this section, we discuss the task of prepositional phrase (PP) attachment disambiguation, a well-studied but hard task in syntactic parsing.", "PP attachment disambiguation inherently requires an accurate description of the interactions among the head, the preposition and the complement, which makes it an ideal task for evaluating our tensor-based embeddings.", "Task: The English dataset used in this work was collected from a linguistic treebank by Belinkov et al. (2015).", "It provides 35,359 training and 1,951 test instances.", "Each instance consists of several head candidates, a preposition and a complement word; the task is to pick the head to which the preposition attaches.", "In the example he saw an elephant with long tusks, the words saw and elephant are the candidate head words.", "Our algorithm: Let v_h, v_p and v_c be the embeddings of the head candidate h, the preposition p and the child (complement) c, respectively.", "We then use the following features (a sketch follows the baseline comparison below): 1. Embedding features: the candidate head, preposition and complement embeddings; 2. Triple similarity: ⟨v_h, v_p, v_c⟩ / (||v_h||_3 ||v_p||_3 ||v_c||_3); 3. Head-preposition similarity: v_h^T v_p / (||v_h||_2 ||v_p||_2); 4. Head-child similarity: v_h^T v_c / (||v_h||_2 ||v_c||_2); 5. Part-of-speech (POS) tags of the candidates and the next words; 6. Distance between the candidate head and the preposition.", "We use a basic neural network, a two-layer feed-forward network (FNN) with hidden layers of sizes 1000 and 20, to take the input features and predict the probability that a candidate is the head.", "The candidate with the highest likelihood is chosen as the head.", "Baselines: For comparison, we include the following state-of-the-art approaches to preposition attachment disambiguation; the linguistic resources they use to enrich their features are listed in Table 6.",
"Table 6 (Accuracy in prepositional attachment disambiguation) compares the classifiers, embedding methods, resources and accuracies: HPCD (GloVe, with enriched features; POS tags, WordNet, VerbNet) 0.887; LRFR (word2vec; POS tags, WordNet, VerbNet) 0.903; OntoLSTM (GloVe-extended; POS tags, WordNet) 0.897; FNN with word2vec (POS tags only) 0.866; FNN with GloVe (POS tags only) 0.858; FNN with our method (ALS; POS tags only) 0.883; FNN with our method (WD; POS tags only) 0.892.", "(1) Head-Prep-Child-Dist (HPCD) model (Belinkov et al., 2015): a compositional neural network used to train task-specific representations of prepositions.", "(2) Low-Rank Feature Representation (LRFR) (Yu et al., 2016): this method incorporates word parts, contexts and labels into a tensor, and uses the decomposed vectors as features for disambiguation.", "(3) Ontology LSTM (OntoLSTM) (Dasigi et al., 2017): the vectors are initialized with GloVe, extended by AutoExtend (Rothe and Schutze, 2015), and trained via LSTMs for head selection.", "As in the preposition selection experiments (Section 4), we also include baselines that share the same feed-forward network architecture but generate features from vectors trained with word2vec and GloVe; they are denoted as FNN with different initializations in Table 6.", "Since attachment disambiguation is a selection task, accuracy is the natural evaluation metric.", "Result: We compare the results of the different approaches and the linguistic resources they use in Table 6, where we see that our simple classifier built on the tensor representation is comparable in performance to the state of the art (within 1% of its result).", "This result is notable considering that prior competitive approaches used significant linguistic resources such as VerbNet and WordNet, whereas we use none.", "With the same feed-forward neural network as the classifier, our tensor-based approaches (both ALS and WD) achieve better performance than word2vec and GloVe.", "An ablation analysis, provided in Table 7, shows that the head vector feature affects performance the most (indicating that heads interact most closely with prepositions), with the POS tag feature second.", "Table 7 (Ablation analysis in preposition attachment disambiguation) reports accuracy after removing each feature: head vector 0.843, preposition vector 0.871, child vector 0.880, head-prep similarity 0.877, head-child similarity 0.885, triple similarity 0.873, POS 0.850, distance 0.872.", "The similarity features appear less important, since the classifier has access to lexical relatedness via the embedding features.", "Prior work has reported the importance of the distance feature, since 81.7% of sentences take the word closest to the preposition as the head; in our experiments, the distance feature was found to be less important than the embedding features.", "Discussion: We found that one source of attachment disambiguation error is the lack of broader context in our features.", "A broader context is critical in examples such as worked and system, the head candidates of for trades in the sentence worked on a system for trading.", "Both are reasonable heads in the expressions worked for trades and system for trades, and further disambiguation requires a context larger than the one we considered.",
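A simplified, assumed sketch of the attachment features and head selection described above; the POS one-hot encoding, the distance scalar, and the scoring function stand in for the trained FNN.

```python
import numpy as np

def attachment_features(v_h, v_p, v_c, pos_onehot, dist):
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
    triple = (v_h * v_p * v_c).sum() / (
        np.linalg.norm(v_h, 3) * np.linalg.norm(v_p, 3)
        * np.linalg.norm(v_c, 3) + 1e-9)
    return np.concatenate(
        [v_h, v_p, v_c, [triple, cos(v_h, v_p), cos(v_h, v_c), dist], pos_onehot])

def pick_head(candidates, score_fn):
    # the head is the candidate whose feature vector the trained FNN scores highest
    return max(candidates, key=score_fn)
```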
"Word representation: Word embeddings have been successfully used in many NLP applications.", "Word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) show that embeddings can capture lexical semantics very well.", "Zhang et al. (2014) studied embeddings that generalize different similarity perspectives when combined with corresponding linear transformations.", "Unlike for other words, the crucial syntactic roles of prepositions, in addition to their rich semantic meanings, have been highlighted in prior work (Hovy et al., 2010; Schneider et al., 2015).", "Nevertheless, word representations specifically focused on prepositions are not available, and to the best of our knowledge ours is the first work exploring this direction.", "Tensor decomposition: Tensors embed higher-order interactions among different modes, and tensor decomposition captures these interactions via lower-dimensional representations.", "There are several decomposition methods, such as Alternating Least Squares (ALS) (Kolda and Bader, 2009), Simultaneous Diagonalization (SD) (Kuleshov et al., 2015) and optimization-based methods (Liu and Nocedal, 1989; More, 1978).", "Orthogonalized Alternating Least Squares (Orth-ALS) adds a component-orthogonalization step to each update of the ALS method (Sharan and Valiant, 2017).", "Owing to its theoretical guarantees and, more relevantly, its good empirical performance, Orth-ALS is the algorithm of choice in this paper.", "Preposition selection: Preposition selection, an important area of study in computational linguistics, is also a very practical topic in the context of grammar correction and second language learning.", "Prior works have used hand-crafted heuristic rules (Xiang et al., 2013), n-gram features (Prokofyev et al., 2014; Rozovskaya et al., 2013), and POS tags and dependency relations to enrich other features (Kao et al., 2013), all toward addressing preposition error correction.", "Prepositional attachment disambiguation: There is a storied literature on prepositional attachment disambiguation, long recognized as an important part of syntactic parsing (Kiperwasser and Goldberg, 2016).", "Recent works based on word embeddings have pushed the boundary of state-of-the-art empirical results.", "A seminal work in this direction is the Head-Prep-Child-Dist model, which trained embeddings in a compositional network to maximize the accuracy of head prediction (Belinkov et al., 2015).", "Its performance has been further improved in conjunction with semantic and syntactic features.", "One recent work proposed an initialization with semantics-enriched GloVe embeddings and retrained the representations with LSTM-RNNs (Dasigi et al., 2017); another used tensor decompositions to capture the relation between word representations and their labels (Yu et al., 2016).", "Co-occurrence counts of word pairs in sentences and the resulting word vector representations (embeddings) have revolutionized NLP research.", "A natural generalization is to consider co-occurrence counts of word triples, resulting in a third-order tensor.",
"Partly due to the size of the tensor (a vocabulary of 1M words leads to a tensor with 10^18 entries!) and partly due to the extreme dynamic range of its entries (including sparsity), word vector representations obtained via tensor decomposition have largely been inferior to their lower-order cousins (i.e., regular word embeddings).", "In this work, we trek this well-trodden but arduous terrain by restricting word triples to the scenario where one of the words is a preposition.", "This is linguistically justified, since prepositions are understood to model interactions between pairs of words.", "Numerically, it is also well justified, since the sparsity and dynamic range of the resulting tensor are no worse than those of the original matrix of pairwise co-occurrence counts; this is because prepositions are very frequent and co-occur with essentially every word in the vocabulary.", "Our intrinsic evaluations and new state-of-the-art results in downstream evaluations lend strong credence to the tensor-based approach to prepositional representation.", "We expect our vector representations of prepositions to be widely used in more complicated downstream NLP tasks where the prepositional role is crucial, including text-to-programs (Guu et al., 2017).", "This work is supported by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR), a research collaboration as part of the IBM AI Horizons Network." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "method", "result", "abstain", "objective", "objective", "result", "abstain", "objective", "method", "method", "abstain", "abstain", "abstain", "objective", "result", "result", "abstain", "abstain", "abstain", "method", "result", "result", "abstain", "method", "abstain", "abstain", "abstain", "method", "result", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other" ]
[ "Understanding privacy policies is crucial for users, as it empowers them to learn about the information that matters to them.", "Sentences written in a privacy policy document explain privacy practices, and the constituent text spans convey further specific information about those practices.", "We refer to predicting the privacy practice explained in a sentence as intent classification and to identifying the text spans sharing specific information as slot filling.", "In this work, we propose PolicyIE, an English corpus consisting of 5,250 intent and 11,788 slot annotations spanning 31 privacy policies of websites and mobile applications.", "The PolicyIE corpus is a challenging real-world benchmark with limited labeled examples, reflecting the cost of collecting large-scale annotations from domain experts.", "We present two alternative neural approaches as baselines: (1) intent classification and slot filling as joint sequence tagging, and (2) modeling them as a sequence-to-sequence (Seq2Seq) learning task.", "The experiment results show that both approaches perform comparably in intent classification, while the Seq2Seq method outperforms the sequence tagging approach in slot filling by a large margin.", "We perform a detailed error analysis to reveal the challenges of the proposed corpus.", "Privacy policies inform users about how a service provider collects, uses, and maintains the users' information.", "The service providers collect the users' data via their websites or mobile applications and analyze it for various purposes.", "The users' data often contain sensitive information; therefore, the users must know how their information will be used, maintained, and protected from unauthorized and unlawful use.", "Privacy policies are meant to explain all these use cases in detail.", "This makes privacy policies often very long, complicated, and confusing (McDonald and Cranor, 2008; Reidenberg et al., 2016).", "As a result, users tend not to read privacy policies (Commission et al., 2012; Gluck et al.; Marotta-Wurgler, 2015), leading to undesirable consequences.", "For example, users might not be aware of their data being sold to third-party advertisers, even if they have given their consent to the service providers in return for using their services.", "Therefore, automating information extraction from verbose privacy policies can help users understand their rights and make informed decisions.", "In recent years, we have seen substantial efforts to utilize natural language processing (NLP) techniques to automate privacy policy analysis.", "In the literature, information extraction from policy documents is formulated as text classification (Wilson et al., 2016a; Harkous et al., 2018; Zimmeck et al., 2019), text alignment (Liu et al., 2014; Ramanath et al., 2014), and question answering (QA) (Shvartzshanider et al., 2018; Harkous et al., 2018; Ravichander et al., 2019; Ahmad et al., 2020).", "Although these approaches effectively identify the sentences or segments in a policy document relevant to a privacy practice, they fall short of extracting fine-grained structured information.", "As shown in the first example in Table 1, the privacy practice label Data Collection/Usage informs the user how, why, and what types of user information will be collected by the service provider.", "The policy also specifies that the users' username and icon or profile photo will be used for marketing purposes.", "This informs the user precisely what information the service provider will use and why.",
"The challenge in training models to extract fine-grained information is the lack of labeled examples.", "Annotating privacy policy documents is expensive, as they can be thousands of words long, and it requires domain experts (e.g., law students).", "Therefore, prior works annotate privacy policies at the sentence level, without further utilizing the constituent text spans that convey specific information.", "Table 1 shows an annotated example: [We] (Data Collector: First Party Entity) may also [use] (Action) or display [your] (Data Provider: user) [username] (Data Collected: User Online Activities/Profiles) and [icon or profile photo] (Data Collected: User Online Activities/Profiles) on [marketing purpose or press releases] (Purpose: Advertising/Marketing).", "Sentences written in a policy document explain privacy practices, which we refer to as intent classification, and identifying the constituent text spans that share further specific information is slot filling.", "Table 1 shows a couple of examples.", "This formulation of information extraction lifts the users' burden of comprehending the relevant segments in a policy document and identifying the details, such as how and why users' data are collected and shared with others.", "To facilitate fine-grained information extraction, we present PolicyIE, an English corpus consisting of 5,250 intent and 11,788 slot annotations over 31 privacy policies of websites and mobile applications.", "We perform experiments using sequence tagging and sequence-to-sequence (Seq2Seq) learning models to jointly model intent classification and slot filling.", "The results show that both modeling approaches perform comparably in intent classification, while the Seq2Seq models outperform the sequence tagging models in slot filling by a large margin.", "We conduct a thorough error analysis and categorize the errors into seven types.", "We observe that the sequence tagging approaches miss more slots, while the Seq2Seq models predict more spurious slots.", "We further discuss the error cases, considering other factors, to help guide future work.", "We release the code and data to facilitate research.", "2 Construction of PolicyIE Corpus.", "2.1 Privacy Policies Selection.", "The scope of privacy policies primarily depends on how service providers operate.", "For example, service providers relying primarily on mobile applications (e.g., Viber, WhatsApp) and those with both websites and applications (e.g., Amazon, Walmart) have different privacy practices detailed in their privacy policies.", "In PolicyIE, we want to achieve broad coverage of the privacy practices exercised by service providers, such that the corpus can serve a wide variety of use cases.", "Therefore, we go through the following steps to select the policy documents.", "Initial Collection: Ramanath et al. (2014) introduced a corpus of 1,010 privacy policies of the top websites ranked on Alexa.com.",
"We crawled those websites' privacy policies in November 2019, since the previously released privacy policies are outdated.", "For mobile application privacy policies, we scraped application information from the Google Play Store using the play-scraper public API (https://github.com/danieliu/play-scraper) and crawled the corresponding privacy policies.", "We ended up with 7,500 mobile applications' privacy policies.", "Filtering: First, we filter out the privacy policies written in a non-English language and the mobile applications' privacy policies with an app review rating of less than 4.5.", "Then we filter out privacy policies that are too short (< 2,500 words) or too long (> 6,000 words).", "Finally, we randomly select 200 website and 200 mobile application privacy policies (400 documents in total), ensuring that the mobile applications span different application categories on the Play Store.", "Post-processing: We ask a domain expert (working in the security and privacy domain for more than three years) to examine the selected 400 privacy policies.", "The goal of the examination is to ensure that the policy documents cover the four privacy practices: (1) Data Collection/Usage, (2) Data Sharing/Disclosure, (3) Data Storage/Retention, and (4) Data Security/Protection.", "These four practices cover how a service provider processes users' data in general and are included in the General Data Protection Regulation (GDPR).", "Finally, we shortlist 50 policy documents for annotation, 25 in each category (websites and mobile applications).", "Annotation Schema: To annotate sentences in a policy document, we consider the first four privacy practices from the annotation schema suggested by Wilson et al. (2016a).",
"Therefore, we perform sentence categorization under five intent classes, described below.", "(1) Data Collection/Usage: what, why and how user information is collected; (2) Data Sharing/Disclosure: what, why and how user information is shared with or collected by third parties; (3) Data Storage/Retention: how long and where user information will be stored; (4) Data Security/Protection: protection measures for user information; (5) Other: privacy practices that do not fall into the above four categories.", "Apart from annotating sentences with privacy practices, we aim to identify the text spans in sentences that explain specific details about the practices.", "For example, in the sentence we collect personal information in order to provide users with a personalized experience, the underlined text span conveys the purpose of data collection.", "In our annotation schema, we refer to the identification of such text spans as slot filling.", "There are 18 slot labels in our annotation schema (provided in the Appendix).", "We group the slots into two categories, type-I and type-II, based on their role in privacy practices.", "While the type-I slots include the participants of privacy practices, such as Data Provider and Data Receiver, the type-II slots include purposes, conditions, and similar elements that characterize further details of privacy practices.", "Note that type-I and type-II slots may overlap; e.g., in the previous example, the underlined text span is the purpose of data collection, and the span user is the Data Provider (whose data is collected).", "In general, type-II slots are longer (consisting of more words) and less frequent than type-I slots.", "In total, there are 14 type-I and 4 type-II slots in our annotation schema.", "These slots are associated with a list of attributes; e.g., Data Collected and Data Shared have the attributes Contact Data, Location Data, Demographic Data, etc.", "Table 1 illustrates a couple of examples.", "We detail the slots and their attributes in the Appendix.", "Annotation Procedure: General crowdworkers, such as Amazon Mechanical Turk workers, are not suitable for annotating policy documents, as the task requires specialized domain knowledge (McDonald and Cranor, 2008; Reidenberg et al., 2016).",
"We hire two law students to perform the annotation.", "We use the web-based annotation tool BRAT (Stenetorp et al., 2012) to conduct the annotation.", "We wrote a detailed annotation guideline (released as supplementary material) and pretested it through multiple rounds of pilot studies.", "The guideline was further updated with notes to resolve complex or corner cases during the annotation process.", "The annotation process was closely monitored by a domain expert and a legal scholar, and was granted an exemption by the Institutional Review Board (IRB).", "The annotators are presented with one segment of a policy document at a time and asked to perform the annotation following the guideline.", "We manually segment the policy documents such that each segment discusses similar issues, to reduce ambiguity on the annotator's end.", "The annotators worked for 10 weeks, at an average of 10 hours per week, and completed annotations for 31 policy documents.", "Each annotator was paid $15 per hour.", "Post-editing and Quality Control: We compute inter-annotator agreement for each annotated segment of the policy documents using Krippendorff's Alpha (K) (Klaus, 1980).", "The annotators are asked to discuss their annotations and re-annotate the sections with token-level K falling below 0.75.", "A K value within the range of 0.67 to 0.8 is considered acceptable for tentative conclusions (Artstein and Poesio, 2008; Reidsma and Carletta, 2008).", "After the re-annotation process, we calculate the agreement for the two categories of slots individually; the inter-annotator agreement is 0.87 and 0.84 for type-I and type-II slots, respectively.", "Then the adjudicators discuss and finalize the annotations.", "The adjudication process involves one of the annotators, the legal scholar, and the domain expert.", "Data Statistics & Format: Table 2 presents the statistics of the PolicyIE corpus (train/test): 25/6 policies, 4,209/1,041 sentences, 7,327/1,704 type-I slots, 2,263/494 type-II slots; average sentence length 23.73/26.62; average number of type-I slots per sentence 4.48/4.75; average number of type-II slots per sentence 1.38/1.38; average type-I slot length 2.01/2.15; average type-II slot length 8.70/10.70.", "The corpus consists of 15 and 16 privacy policies of websites and mobile applications, respectively.", "We release the annotated policy documents split into sentences (using UDPipe (Straka et al., 2016)).", "Each sentence is associated with an intent label, and the constituent words are associated with slot labels (following the BIO tagging scheme).", "PolicyIE provides annotations of privacy practices and the corresponding text spans in privacy policies.", "We refer to privacy practice prediction for a sentence as intent classification and to identifying the text spans as slot filling.", "We present two alternative approaches: the first jointly models intent classification and slot tagging (Chen et al., 2019), and the second casts the problem as a sequence-to-sequence (Seq2Seq) learning task (Rongali et al., 2020; Li et al., 2021).", "Following Chen et al. (2019), given a sentence s = w_1, ..., w_l from a privacy policy document D, a special token (w_0 = [CLS]) is prepended to form the input sequence that is fed to an encoder.", "The encoder produces contextual representations of the input tokens, h_0, h_1, ..., h_l, where h_0 and h_1, ..., h_l are fed to separate softmax classifiers to predict the target intent and slot labels:", "$y^i = \mathrm{softmax}(W_i^{\top} h_0 + b_i)$, $\; y_n^s = \mathrm{softmax}(W_s^{\top} h_n + b_s)$, $\; n \in 1, \ldots, l$, where $W_i \in \mathbb{R}^{d \times I}$, $W_s \in \mathbb{R}^{d \times S}$, $b_i \in \mathbb{R}^{I}$ and $b_s \in \mathbb{R}^{S}$ are parameters, and I, S are the total numbers of intent and slot types, respectively.",
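A minimal sketch of the joint tagging architecture just described; the encoder stands in for any model producing hidden states h_0..h_l (e.g., BERT), and the hyperparameters are illustrative.

```python
import torch.nn as nn

class JointIntentSlotTagger(nn.Module):
    def __init__(self, encoder, hidden_dim, num_intents, num_slots):
        super().__init__()
        self.encoder = encoder
        self.intent_head = nn.Linear(hidden_dim, num_intents)
        self.slot_head = nn.Linear(hidden_dim, num_slots)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask)[0]  # (B, l+1, d)
        intent_logits = self.intent_head(h[:, 0])   # from h_0 ([CLS])
        slot_logits = self.slot_head(h[:, 1:])      # from h_1 .. h_l
        return intent_logits, slot_logits
```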
"The sequence tagging model (composed of an encoder and a classifier) learns to maximize the conditional probability $p(y^i, y^s \mid s) = p(y^i \mid s) \prod_{n=1}^{l} p(y_n^s \mid s)$ to perform intent classification and slot filling jointly.", "We train the models end-to-end by minimizing the cross-entropy loss.", "Table 3 shows an example of the input and output used to train the joint intent and slot tagging models.", "Since type-I and type-II slots have different characteristics, as discussed in Section 2.2, and may overlap, we train two separate sequence tagging models for type-I and type-II slots to keep the baseline models simple; span enumeration based techniques (Wadden et al., 2019; Luan et al., 2019) could be utilized to tag both types of slots jointly, and we leave this as future work.", "We use BiLSTM (Liu and Lane, 2016; Zhang and Wang, 2016), Transformer (Vaswani et al., 2017), BERT (Devlin et al., 2019), and RoBERTa (Liu et al., 2019) as encoders to form the sequence tagging models.", "In addition, we consider an embedding-based baseline where the input word embeddings are fed directly to the softmax classifiers.", "The special token (w_0 = [CLS]) embedding is formed by applying average pooling over the input word embeddings.", "We train WordPiece embeddings with a 30,000-token vocabulary (Devlin et al., 2019) using fastText (Bojanowski et al., 2017) on a corpus of 130,000 privacy policies collected from apps on the Google Play Store (Harkous et al., 2018).", "We use the hidden state corresponding to the first WordPiece of a token to predict the target slot labels.", "Conditional random fields (CRFs) help in structured prediction tasks, such as semantic role labeling (Zhou and Xu, 2015) and named entity recognition (Cotterell and Duh, 2017).", "Therefore, we also model slot labeling jointly using a conditional random field (CRF) (Lafferty et al., 2001), where only interactions between two successive labels are considered; we refer the readers to Ma and Hovy (2016) for details.", "Recent works in semantic parsing (Rongali et al., 2020; Zhu et al., 2020; Li et al., 2021) formulate the task as sequence-to-sequence (Seq2Seq) learning.", "Taking this as motivation, we investigate the scope of Seq2Seq learning for joint intent classification and slot filling on privacy policy sentences.", "In Table 3, we show an example of the encoder input and decoder output used in Seq2Seq learning.", "We form the target sequences by following the template: [IN:LABEL [SL:LABEL w_1, ..., w_m] ... ].",
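A hypothetical helper that linearizes an annotated sentence into the target template above. The exact target format follows Table 3 of the paper (which may interleave non-slot words); this sketch keeps only the slot spans, and the label names are illustrative.

```python
def linearize(intent, slots):
    """slots: list of (slot_label, span_words) pairs in sentence order."""
    parts = [f"[SL:{label} {' '.join(words)}]" for label, words in slots]
    return f"[IN:{intent} " + " ".join(parts) + "]"

target = linearize(
    "data-collection-usage",
    [("data-collector", ["We"]), ("action", ["use"]),
     ("data-collected", ["your", "username"])],
)
# -> "[IN:data-collection-usage [SL:data-collector We] [SL:action use] ..."
```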
"During inference, we use greedy decoding and parse the decoded sequence to extract the intent and slot labels.", "Note that we only consider text spans in the decoded sequences that are surrounded by square brackets; the rest are discarded.", "Since our proposed PolicyIE corpus consists of only a few thousand examples, instead of training Seq2Seq models from scratch we fine-tune pre-trained models as the baselines.", "Specifically, we consider five state-of-the-art models: MiniLM (Wang et al., 2020), UniLM (Dong et al., 2019), UniLMv2 (Bao et al., 2020), MASS (Song et al.), and BART (Lewis et al., 2020).", "Implementation: We use the implementations of BERT and RoBERTa from the transformers API (Wolf et al., 2020); for the Seq2Seq learning baselines, we use their public implementations (https://github.com/microsoft/unilm, https://github.com/microsoft/MASS, https://github.com/pytorch/fairseq/tree/master/examples/bart).", "We train the BiLSTM and Transformer baseline models, and fine-tune all the other baselines, for 20 epochs, choosing the best checkpoint based on validation performance.", "From the 4,209 training examples, we use 4,000 examples for training (~95%) and 209 examples for validation (~5%).", "We tune the learning rate in [1e-3, 5e-4, 1e-4, 5e-5, 1e-5] and set the batch size to 16 in all our experiments (to fit on one GeForce GTX 1080 GPU with 11 GB of memory).", "We train (or fine-tune) all the models five times with different seeds and report average performances.", "Evaluation Metrics: To evaluate the baseline approaches, we compute the F1 score for the intent classification (micro-averaged) and slot filling tasks.", "We also compute an exact match (EM) accuracy: a prediction counts as an exact match if the predicted intent matches the reference intent and the slot F1 is 1.0.", "Human Performance is computed by considering each annotator's annotations as predictions and the adjudicated annotations as the reference; the final score is an average across all annotators.", "We aim to address the following questions: 1. How do the two modeling approaches perform on our proposed dataset (Section 4.1)? 2. How do they perform on different intent and slot types (Section 4.2)? 3. What types of errors do the best performing models make (Section 4.3)?", "Sequence Tagging: The overall performances of the sequence tagging models are presented in Table 4.", "The pre-trained models, BERT and RoBERTa, outperform the other baselines by a large margin.", "Using a conditional random field (CRF), the models boost the slot tagging performance with a slight degradation in intent classification performance.", "For example, the RoBERTa + CRF model improves over RoBERTa by 2.8% and 3.9% in terms of type-I slot F1 and EM, with a 0.5% drop in intent F1 score.", "The results indicate that predicting type-II slots is more difficult than predicting type-I slots, as they differ in length (type-I slots are mostly phrases, while type-II slots are clauses) and are less frequent in the training examples.", "However, the EM accuracy for type-I slots is lower than for type-II slots, because there are more type-I slots (~4.75) than type-II slots (~1.38) on average per sentence.", "Note that if a model fails to predict even one of the slots, EM will be zero.", "Seq2Seq Learning: Seq2Seq models predict the intent and slots by generating the labels and spans following a template; we then extract the intent and slot labels from the generated sequences.", "The experiment results are presented in Table 5.",
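An assumed sketch of the extraction step described above: parsing a decoded target sequence back into an intent and a list of (slot_label, span) pairs, plus the EM check. It assumes intent and slot names contain no brackets and slot names contain no spaces; the regular expressions mirror the template format but are not the authors' exact parser.

```python
import re

def parse(decoded):
    intent = re.match(r"\[IN:([^\[\]]+?)(?: \[|\]$)", decoded)
    slots = re.findall(r"\[SL:([^\[\] ]+) ([^\[\]]+)\]", decoded)
    return (intent.group(1).strip() if intent else None,
            [(label, span.strip()) for label, span in slots])

def exact_match(pred, gold):
    p_int, p_slots = parse(pred)
    g_int, g_slots = parse(gold)
    return p_int == g_int and sorted(p_slots) == sorted(g_slots)
```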
"To our surprise, we observe that all the models perform well in predicting the intent and slot labels.", "The best performing model (according to slot F1 score) is BART, with 400 million parameters, outperforming its smaller variant by 10.1% and 2.8% in terms of slot F1 for type-I and type-II slots, respectively.", "Sequence Tagging vs. Seq2Seq Learning: It is evident from the experiment results that the Seq2Seq models outperform the sequence tagging models in slot filling by a large margin, while in intent classification they are competitive.", "However, both modeling approaches perform poorly at predicting all the slots in a sentence correctly, resulting in low EM scores.", "One interesting observation is that the Seq2Seq models significantly outperform the sequence tagging models in predicting type-II slots.", "Note that type-II slots are longer and less frequent, and we suspect that conditional text generation helps the Seq2Seq models predict them accurately.", "In comparison, we suspect that the sequence tagging models perform poorly on this category due to the smaller number of labeled type-II examples (as noted before, we train the sequence tagging models for type-I and type-II slots individually).", "Next, we break down the performances of RoBERTa (w/ CRF) and BART, the best performing models in their respective categories, followed by an error analysis to shed light on the error types.", "Intent Classification: In the PolicyIE corpus, 38% of the sentences fall into the first four categories (Data Collection, Data Sharing, Data Storage, Data Security), and the remainder belong to the Other category.", "Therefore, we investigate how much the models are confused in predicting the correct intent label; we provide the confusion matrices of the models in the Appendix.", "Due to the imbalanced label distribution, BART makes many incorrect predictions.", "We notice that BART is most confused between the Data Collection and Data Storage labels.", "Our manual analysis reveals that BART is confused between the slot labels {Data Collector, Data Holder} and {Data Retained, Data Collected}, as they are often associated with the same text spans; we suspect this contributes to BART's confusion.", "Table 6 presents the performance breakdown across intent labels (intent F1 and type-I/type-II slot F1 for RoBERTa and BART).", "Slot Filling: We break down the models' performances in slot filling under two settings.", "First, Table 6 shows slot filling performance under the different intent categories.", "Among the four classes, the models perform worst on slots associated with the Data Security intent class, as PolicyIE has the lowest number of annotations for that intent category.", "Second, we demonstrate the models' performances on the different slot types in Figure 1.",
"(Figure 1: Test set performance (recall score) on PolicyIE for the eighteen slot types.)", "RoBERTa's recall score for the polarity, protect-against, protection-method and storage-place slot types is zero, because these slot types have the lowest numbers of training examples in PolicyIE.", "On the other hand, BART achieves a higher recall score, especially for the polarity label, as the corresponding spans are short.", "We also study the models' performances on slots of different lengths.", "The results show that BART outperforms RoBERTa by a larger margin on longer slots (see Figure 2), corroborating our hypothesis that conditional text generation yields more accurate predictions for longer spans.", "We analyze the incorrect intent and slot predictions made by RoBERTa and BART and categorize the errors into seven types.", "Note that a predicted slot is considered correct if its label and span both match (exact match) one of the references.", "We characterize the error types as follows.", "1. Wrong Intent (WI): the predicted intent label does not match the reference intent label.", "2. Missing Slot (MS): none of the predicted slots exactly matches a reference slot.", "3. Spurious Slot (SS): the label of a predicted slot does not match any of the references.", "4. Wrong Split (WSp): two or more predicted slot spans with the same label could be merged to match one of the reference slots; a merged span and a reference span may differ only in punctuation or stopwords (e.g., and).", "5. Wrong Boundary (WB): a predicted slot span is a sub-string of the reference span, or vice versa; the slot label must match exactly.", "6. Wrong Label (WL): a predicted slot span matches a reference, but the label does not.", "7. Wrong Slot (WS): all other types of errors fall into this category.", "We provide one example of each error type in Table 7.", "In Table 8, we present the counts of each error type made by the RoBERTa and BART models.", "The two most frequent error types are SS and MS.",
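A rough, assumed sketch of assigning a predicted slot to one of the error categories above, given gold slots as (label, span) pairs; the WSp and WS cases are simplified away into the fallback branch.

```python
def categorize(pred, gold_slots):
    label, span = pred
    if pred in gold_slots:
        return "correct"
    if label not in {g_label for g_label, _ in gold_slots}:
        return "SS"      # spurious slot: label matches no reference
    for g_label, g_span in gold_slots:
        if g_label == label and (span in g_span or g_span in span):
            return "WB"  # wrong boundary: span containment with matching label
    if any(g_span == span for _, g_span in gold_slots):
        return "WL"      # wrong label: span matches, label does not
    return "WS"          # everything else (WSp handling omitted here)

def missing(gold_slots, preds):
    # MS: reference slots that no prediction exactly matches
    return [g for g in gold_slots if g not in preds]
```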
"While BART makes more SS errors, RoBERTa suffers more from MS errors.", "While both models make similar numbers of errors in total, BART makes more correct predictions, resulting in the higher recall score discussed before.", "One possible way to reduce SS errors is to penalize wrong slot label predictions more heavily than wrong slot spans.", "Reducing MS errors, on the other hand, is more challenging, as many of the missing slots have fewer annotations than the others.", "We provide more qualitative examples in the Appendix (see Tables 11 and 12).", "In the error analysis, we exclude the test examples (sentences) with the intent label Other and no slots.", "Out of the 1,041 test instances in PolicyIE, 682 instances have the intent label Other.", "We analyze RoBERTa's and BART's predictions on those examples separately, checking whether the models predict slots for them, since we count such slots as spurious.", "While RoBERTa meets our expectation of being highly accurate (correct predictions for 621 out of 682), BART also correctly predicts 594 out of 682 by precisely generating [IN:Other].", "Overall, the error analysis aligns with our expectation that the Seq2Seq modeling technique is promising and should be explored further in future work.", "Automated Privacy Policy Analysis: Automating privacy policy analysis has drawn researchers' attention, as it enables users to know their rights and act accordingly.", "Therefore, significant research effort has been devoted to understanding privacy policies.", "Earlier approaches (Costante et al., 2012) designed rule-based pattern matching techniques to extract specific types of information.", "Under the Usable Privacy Project (Sadeh et al., 2013), several works have been carried out (Bhatia and Breaux, 2015; Wilson et al., 2016a,b; Sathyendra et al., 2016; Bhatia et al., 2016; Hosseini et al., 2016; Mysore Sathyendra et al., 2017; Zimmeck et al., 2019; Bannihatti Kumar et al., 2020).", "Notable works leveraging NLP techniques include text alignment (Liu et al., 2014; Ramanath et al., 2014), text classification (Wilson et al., 2016a; Harkous et al., 2018; Zimmeck et al., 2019), and question answering (QA) (Shvartzshanider et al., 2018; Harkous et al., 2018; Ravichander et al., 2019; Ahmad et al., 2020).", "Bokaie Hosseini et al. (2020) is the work closest to ours; it used named entity recognition (NER) modeling techniques to extract third-party entities mentioned in policy documents.", "Our proposed PolicyIE corpus is distinct from previous privacy policy benchmarks: OPP-115 (Wilson et al., 2016a) uses a hierarchical annotation scheme to annotate text segments with a set of data practices and has been used for multi-label classification (Wilson et al., 2016a; Harkous et al., 2018) and question answering (Harkous et al., 2018; Ahmad et al., 2020); PrivacyQA (Ravichander et al., 2019) frames the QA task as identifying a list of relevant sentences from policy documents.", "Recently, Bui et al. (2021) created a dataset by tagging documents from OPP-115 for privacy practices and used NER models to extract them.",
"In contrast, PolicyIE is developed following semantic parsing benchmarks, and we model the task following the NLP literature.", "Intent Classification and Slot Filling: Voice assistants and chatbots frame the task of natural language understanding as classifying intents and filling slots given user utterances.", "Several benchmarks have been proposed in the literature, covering several domains and languages (Hemphill et al., 1990; Coucke et al., 2018; Gupta et al., 2018; Upadhyay et al., 2018; Schuster et al., 2019; Xu et al., 2020; Li et al., 2021).", "Our proposed PolicyIE corpus is a new addition to this literature within the security and privacy domain.", "PolicyIE enables us to build conversational solutions that users can interact with to learn about privacy policies.", "This work aims to stimulate research on automating information extraction from privacy policies and reconciling it with users' understanding of their rights.", "We present PolicyIE, an intent classification and slot filling benchmark on privacy policies, with two alternative neural approaches as baselines.", "We perform a thorough error analysis to shed light on the limitations of the two baseline approaches.", "We hope this contribution will encourage research efforts in the specialized privacy domain from both the privacy and NLP communities.", "The authors acknowledge the law students Michael Rasmussen and Martyna Glaz at Fordham University, who worked as annotators to make the development of this corpus possible.", "This work was supported in part by National Science Foundation Grant OAC 1920462.", "Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and do not necessarily reflect those of the US Government or NSF.", "Privacy and data breaches have a significant impact on individuals.", "In general, security breaches expose users to different risks, such as financial loss (due to lost employment or business opportunities), physical risks to safety, and identity theft.", "Identity theft is among the most severe and fastest-growing crimes.", "However, the risks due to data breaches can be minimized if users know their rights and how to exercise them to protect their privacy.", "This requires users to read the privacy policies of the websites they visit and the mobile applications they use.", "As reading privacy policies is a tedious task, automating privacy policy analysis reduces the burden on users.", "Automating information extraction from privacy policies empowers users to be aware of their data being collected and analyzed by service providers for different purposes.", "Service providers collect consumer data at a massive scale and often fail to protect it, resulting in data breaches that have led to increased attention to data privacy and the related risks.", "Reading privacy policies to understand users' rights can help in making informed and timely decisions to safeguard data privacy and mitigate these risks.", "Developing an automated solution to facilitate policy document analysis requires labeled examples, and the PolicyIE corpus adds a new dimension to the available datasets in the security and privacy domain.", "While PolicyIE enables us to train models to extract fine-grained information from privacy policies, the corpus can also be coupled with other existing benchmarks to build a comprehensive system.", "For example, the PrivacyQA corpus (Ravichander et al., 2019) combined with PolicyIE can facilitate building QA systems that answer questions with fine-grained details.",
"We believe our experiments and analysis will help direct future research." ]
[ "abstain", "abstain", "method", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "objective", "result", "method", "abstain", "abstain", "abstain", "result", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "other", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "other", "other", "objective", "method", "objective", "method", "method", "objective", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method" ]
[ "We examine a methodology using neural language models (LMs) for analyzing the word order of language.", "This LM-based method has the potential to overcome the difficulties existing methods face, such as the propagation of preprocessor errors in count-based methods.", "In this study, we explore whether the LM-based method is valid for analyzing the word order.", "As a case study, this study focuses on Japanese due to its complex and flexible word order.", "To validate the LM-based method, we test", "(i) parallels between LMs and human word order preference, and", "(ii) consistency of the results obtained using the LM-based method with previous linguistic studies.", "Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool.", "Finally, using the LM-based method, we demonstrate the relationship between the canonical word order and topicalization, which had yet to be analyzed by large-scale experiments.", "Even for such a particular alternation, several studies (Bresnan et al., 2007; Hovav and Levin, 2008; Colleman, 2009) investigated the factors determining this word order and found that the choice is not random.", "For analyzing such linguistic phenomena, linguists repeat the cycle of constructing hypotheses and testing their validity, usually through psychological experiments or count-based methods.", "However, these approaches sometimes face difficulties, such as scalability issues in psychological 0.0002 0.0000001 generationprobabilities order 1 is more likely.", "Compared to the typical approaches for evaluating linguistic hypotheses, approaches using LMs have potential advantages (Section 3.2).", "In this study, we examine the methodology of using LMs for analyzing word order (Figure 1).", "To validate the LM-based method, we first examine if there is a parallel between canonical word order and generation probability of LMs for each word order.", "Futrell and Levy (2019) reported that English LMs have human-like word order preferences, which can be one piece of evidence for validating the LM-based method.", "However, it is not clear whether the above assumption is valid even in languages with more flexible word order.", "In this study, we specifically focus on the Japanese language due to its complex and flexible word order.", "There are many claims on the canonical word order of Japanese, and it has attracted considerable attention from linguists and natural language processing (NLP) researchers for decades (Hoji, 1985; Saeki, 1998; Miyamoto, 2002; Matsuoka, 2003; Koizumi and Tamaoka, 2004; Nakamoto et al., 2006; Shigenaga, 2014; Sasano and Okumura, 2016; Orita, 2017; Asahara et al., 2018).", "We investigated the validity of using Japanese LMs for canonical word order analysis by conducting two sets of experiments:", "(i) comparing word order preference in LMs to that in Japanese speakers (Section 4), and", "(ii) checking the consistency Topic Time Location Subject (Adverb) Indirect object Direct object Verb Notation TOP TIM LOC NOMDAT ACC Typical particle", "between the preference of LMs with previous linguistic studies", "(Section 5).", "From our experiments, we tentatively conclude that LMs display sufficient word order knowledge for usage as an analysis tool, and further explore potential applications.", "Finally, we analyzed the relationship between topicalization and word order of Japanese by taking advantage of the LM-based method", "(Section 6).", "Discuss and validate the use of LMs as a tool for word 
order analysis, as well as investigate the sensitivity of LMs to different word orders in a non-European language (Section 3); Find encouraging parallels between the results obtained with the LM-based method and those with the previously established method on various hypotheses of canonical word order of Japanese (Sections 4 and 5); and Showcase the advantages of an LM-based method through analyzing linguistic phenomena that are difficult to explore with previous data-driven methods (Section 6).", "This section provides a brief overview of the linguistic background of canonical word order, some basics of Japanese grammar, and common methods of linguistic analysis.", "Every language is assumed to have a canonical word order, even those with flexible word order (Comrie, 1989).", "There has been a significant linguistic effort to reveal the factors determining the canonical word order (Bresnan et al., 2007; Hoji, 1985).", "The motivations for revealing the canonical word order range from purely linguistic interests to those of various other fields: it relates to language acquisition and production in psycholinguistics (Slobin and Bever, 1982; Akhtar, 1999), second language education (Alonso Belmonte et al., 2000), and natural language generation (Visweswariah et al., 2011) or error correction (Cheng et al., 2014) in NLP.", "In Japanese, there are also many studies on its canonical word order (Hoji, 1985; Saeki, 1998; Koizumi and Tamaoka, 2004; Sasano and Okumura, 2016).", "Japanese canonical word order The word order of Japanese is basically subject-object-verb (SOV) order, but there is no strict rule except placing the verb at the end of the sentence (Tsujimura, 2013).", "For example, the following three sentences have the same denotational meaning ( A teacher gave a student a book.
):", "(2)", "a.", "..............", "(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)", ".", "teacherNOM studentDAT bookACC gave.", "b.", "..............", "", "(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)", ".", "teacherNOM bookACC studentDAT gave.", "c.", "(cid:58)(cid:58)(cid:58)(cid:58)(cid:58)", "", "..............", ".", "bookACC studentDAT teacherNOM gave.", "This order-free nature suggests that the position of each constituent does not represent its semantic role", "(case).", "Instead, postpositional case particles indicate the roles.", "Table 1 shows typical constituents in a Japanese sentence, their postpositional particles, their canonical order, and the sections of this paper where each of them is analyzed.", "Note that postpositional case particles are sometimes omitted or replaced with other particles such as adverbial particles", "(Section 6).", "These characteristics complicate the factors determining word order, which renders the automatic analysis of Japanese word order difficult.", "There are two main methods in linguistic research: human-based methods, which observe human reactions, and data-driven methods, which analyze text corpora.", "Human-based methods A typical approach of testing word order hypotheses is observing the reaction", "(e.g., reading time)", "of humans to each word order", "(Shigenaga, 2014; Bahlmann et al., 2007).", "These approaches are based on the direct observation of humans, but this method has scalability issues.", "Data-driven methods Another typical approach is counting the occurrence frequencies of the targeted phenomena in a large corpus.", "This count-based method is based on the assumption that there are parallels between the canonical word order and the frequency of each word order in a large corpus.", "The parallel has been widely discussed", "(Arnon and Snider, 2010; Bresnan et al., 2007), and many studies rely on this assumption", "(Sasano and Okumura, 2016; Kempen and Harbusch, 2004).", "One of the advantages of this approach is suitability for large-scale experiments.", "This enables considering a large number of examples.", "In this method, researchers often have to identify the phenomena of interest with preprocessors", "(e.g., the predicate-argument structure parser used by Sasano and Okumura", "(2016))", "in order to count them.", "However, sometimes, identification of the targeted phenomena is difficult for the preprocessors, which limits the possibilities of analysis.", "For example, Sasano and Okumura", "(2016)", "focused only on simple examples where case markers appear explicitly, and only extract the head noun of the argument to avoid preprocessor errors.", "Thus, they could not analyze the phenomena in which the above conditions were not met.", "The above issue becomes more serious in low-resource languages, where the necessary preprocessors are often unavailable.", "In this count-based direction, Bloem", "(2016)", "used n-gram LMs to test the claims on the German two-verb clusters.", "This method is closest to our proposed approach, but the general validity of using LMs is out of focus.", "This LM-based method also relies on the assumption of the parallels between the canonical word order and the frequency.", "Another common data-driven approach is to train an interpretable model", "(e.g., Bayesian linear mixed models)", "to predict the targeted linguistic phenomena and analyze the inner workings of the model", "(e.g., slope parameters)", "(Bresnan et al., 2007; Asahara et al., 2018).", "Through this approach, researchers can obtain 
richer statistics, such as the strength of each factor's effect on the targeted phenomena, but creating labeled data and designing features for supervised learning can be costly.", "In the NLP field, LMs are widely used to estimate the acceptability of text (Olteanu et al., 2006; Kann et al., 2018).", "An overview of the LM-based method is shown in Figure 1. After preparing several word orders considering the targeted linguistic hypothesis, we compare their generation probabilities under LMs.", "We assume that the word order with the highest generation probability follows the canonical word order.", "In the count-based methods mentioned in Section 2.2, researchers often require preprocessors to identify occurrences of the phenomena of interest in a large corpus.", "In the LM-based method, on the other hand, researchers need to prepare data to be scored by LMs in order to evaluate a hypothesis.", "Whether it is easier to prepare the preprocessor or the evaluation data depends on the situation.", "For example, data preparation is easier in the situation where one wants to analyze word order trends when a specific postpositional particle is omitted.", "The question is whether Japanese speakers prefer the word order in Example (3)-a or (3)-b (omitted characters in the examples are crossed out).", "While identifying the cases ( ACC in Example (3)) without their postpositional particle is difficult, creating data without a specific postpositional particle by modifying existing data is easier, such as creating Example (4)-b from Example (4)-a.", "The human-based method is more reliable for a given example.", "However, it can be prohibitively costly.", "While the human-based method requires both evaluation data and human subjects, the LM-based method only requires the evaluation data.", "Thus, the LM-based method can be more suitable for estimating the validity of hypotheses and for considering as many examples as exhaustively as possible.", "In addition, the LM-based method is replicable.", "The suitable approach can differ by situation, and broadening the choice of alternative methodologies may be beneficial to linguistic research.", "Nowadays, various useful frameworks, language resources, and machine resources required to train LMs are available, which supports the ease of implementing the LM-based method.", "Moreover, we make the LMs used in this study available.", "3.3 Strategies to validate the use of LMs to analyze word order The goal of this study is to validate the use of LMs for analyzing canonical word order.", "The canonical word order itself is still a subject of research, and the community does not know everything about it.", "Thus, it is ultimately impossible to enumerate the requirements on what LMs should know about canonical word order and probe the knowledge of LMs.", "Instead, we demonstrate the validity of the LM-based method by showcasing two types of parallels: (i) word order preference of LMs showing parallels with that of humans, and (ii) the results obtained with the LM-based method and those with previous methods being consistent on various claims on canonical word order.", "If the results of LMs are consistent with those of existing methods, this supports the possibility that LMs and existing methods have the same ability to evaluate the hypotheses.", "If the LM-based method is assumed to be valid, the method has the potential to streamline research on unevaluated claims on word order.", "In the experiment
sections, we examine the properties of Japanese LMs on (i) and (ii).", "Even if LMs satisfy the criteria described in Section 3.3, there is no exact guarantee that LM scores will reflect the effectiveness of human processing of specific constructions in general.", "Thus, there seems to be a danger of confusing LM artifacts with language facts.", "Based on this, we hope that researchers use LMs as a tool just to limit the hypothesis space.", "LM-supported hypotheses should then be re-verified with a human-based approach.", "Furthermore, since there are a lot of hypotheses and corresponding research, we cannot check all the properties of LMs in this study.", "This study focuses on intra-sentential factors of Japanese case order, and it is still unclear whether the LM-based method works properly for linguistic phenomena that are far from the focus of this study.", "This is the first study where evidence is collected on the validity of using LMs for word order analysis, and it encourages further research on collecting such evidence and examining under what conditions this validity is guaranteed.", "We used auto-regressive, unidirectional LMs with the Transformer architecture (Vaswani et al., 2017).", "We used two variants of LMs, a character-based LM (CLM) and a subword-based LM (SLM).", "In training the SLM, the input sentences are first divided into morphemes by MeCab (Kudo, 2006) with a UniDic dictionary (https://unidic.ninjal.ac.jp/), and then these morphemes are split into subword units by byte-pair encoding (Sennrich et al., 2016), implemented in sentencepiece (Kudo and Richardson, 2018) with character coverage set to 0.9995 and a vocabulary size of 100,000.", "160M sentences (14GB in UTF-8 encoding; for reference, Japanese Wikipedia has around 2.5GB of text) randomly selected from 3B web pages were used to train the LMs.", "Because the focus of this study has a context-independent nature, the sentence order was shuffled to prevent the LMs from learning inter-sentential characteristics of the language.", "Hyperparameters are shown in Appendix A.", "Given a sentence $s$, we calculate its generation probability as $p(s) = p_{\rightarrow}(s) \cdot p_{\leftarrow}(s)$, where $p_{\rightarrow}(\cdot)$ and $p_{\leftarrow}(\cdot)$ are generation probabilities calculated by a left-to-right LM and a right-to-left LM, respectively.", "Depending on the hypothesis, we compare the generation probabilities of various variants of $s$ with different word orders.", "We assume that the word order with the highest generation probability follows the canonical word order.", "To examine the validity of using LMs for canonical word order analysis, we examined the parallels between LMs and humans on the task of determining the canonicality of word order (Figure 2).", "First, we created data for this task (Section 4.1).", "We then compared the word order preference of LMs and that of humans (Section 4.2).", "Data We randomly collected 10k sentences from 3B web pages, which do not overlap with the LM training data.", "To remove overly complex sentences, we extracted sentences that must: (i) have less than or equal to five clauses and one verb, (ii) have clauses with a sibling relationship in their dependency tree, accompanying a particle or adverb, (iii) not have special symbols such as parentheses, and (iv) not have a backward dependency path.", "For each sentence, we created its scrambled version (when several scrambled versions were possible for a given sentence, we randomly selected one of them).",
"The scrambling process is as follows: 1. Identify the dependency structure by using JUMAN (http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?JUMAN) and KNP (http://nlp.ist.i.kyoto-u.ac.jp/EN/index.php?KNP).", "2. Randomly select a clause with several children.", "3. Shuffle the position of its children along with their descendants.", "Annotation We used the crowdsourcing platform of Yahoo! Japan (https://crowdsourcing.yahoo.co.jp/).", "For our task, we showed crowdworkers a pair of sentences (order 1, order 2), where one sentence has the original word order and the other has a scrambled word order; crowdworkers did not know which sentence was the original.", "Each annotator was instructed to label the pair with one of the following choices: (1) order 1 is better, (2) order 2 is better, or (3) the pair contains a semantically broken sentence.", "Only the sentences (order 1, order 2) were shown to the annotators, and they were instructed not to imagine a specific context for the sentences.", "We filtered out unmotivated workers by using check questions, which we manually created in advance, considering Japanese speakers' preferences in trial experiments.", "For each pair instance, we employed 10 crowdworkers.", "In total, 756 unique, motivated crowdworkers participated in our task.", "From the annotated data, we collected only the pairs satisfying the following conditions for our experiments: (i) none of the 10 annotators determined that the pair contains a semantically broken sentence, and (ii) nine or more annotators preferred the same order.", "The majority decision is labeled in each pair; the task is binary classification.", "We assume that if many workers prefer a certain word order, then it follows the canonical word order, and the other one deviates from it.", "We collected 2.6k pair instances of sentences.", "We compared the word order preference of LMs and that of the workers by using the 2.6k pairs created in Section 4.1.", "We calculated the correlation of the decisions between the LMs and the workers on which word order is more appropriate, order 1 or order 2.", "The word orders supported by the CLM and SLM are highly correlated with the workers' preferences, with Pearson correlation coefficients of 0.89 and 0.90, respectively.", "This supports the assumption that the generation probability of LMs can determine the canonical word order as accurately as humans do.", "Note that such a direct comparison of word order is difficult with count-based methods because of the sparsity of the corpus.", "This section examines whether LMs show word order preferences consistent with previous linguistic studies.", "The results are entirely consistent, which supports the validity of the LM-based method in Japanese.", "Each subsection focuses on a specific component of Japanese sentences.", "The order of double objects is one of the most controversial topics in Japanese word order.", "Examples of the possible orders are as follows:", "Henceforth, DAT-ACC / ACC-DAT denotes the word order in which the DAT / ACC argument precedes the ACC / DAT argument.",
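All of the comparisons in Sections 4 and 5 reduce to scoring candidate word orders with the two LMs and keeping the higher-scoring variant. Below is a minimal sketch of that operation, assuming hypothetical `lr_lm` / `rl_lm` callables that return per-token conditional log-probabilities; the toy uniform model exists only so the example runs.

```python
import math

# Sketch: prefer the word order with the higher combined generation
# probability, log p(s) = log p_lr(s) + log p_rl(s), following the
# bidirectional scoring described in Section 3.2.

def sentence_logprob(lm, tokens):
    # Sum of per-token conditional log-probabilities under one LM.
    return sum(lm(tokens[:i], tokens[i]) for i in range(len(tokens)))

def prefers_first(order1, order2, lr_lm, rl_lm):
    def score(tokens):
        return (sentence_logprob(lr_lm, tokens)
                + sentence_logprob(rl_lm, list(reversed(tokens))))
    return score(order1) > score(order2)

# Toy stand-in LM (uniform over a 100-word vocabulary), purely illustrative.
toy_lm = lambda context, token: math.log(1.0 / 100)
print(prefers_first(["teacher-NOM", "student-DAT", "book-ACC", "gave"],
                    ["book-ACC", "student-DAT", "teacher-NOM", "gave"],
                    toy_lm, toy_lm))
```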
"[Figure 3: Overlap of the results of Sasano and Okumura (2016) and the LM-based method; panel (c) shows the relationship between the degree of co-occurrence of verb and arguments and the ACC-DAT rate in each example, where, for the LM results, the ACC-DAT rate of an example is regarded as 1 if the LMs prefer the ACC-DAT order and 0 otherwise.]", "We evaluate the claims Sasano and Okumura (2016) focused on, using the data they collected (we filtered out examples overlapping with the LM training data in advance).", "Word order for each verb First, we analyzed the trend of the double object order for each verb.", "We analyzed 620 verbs following Sasano and Okumura (2016).", "For each set of examples $S^v$ corresponding to a verb $v$, we: (i) created an instance with the swapped order of ACC and DAT for each example, and (ii) compared the generation probabilities of the original and swapped instances.", "$\hat{S}^v$ is the set of examples preferred by the LMs.", "$R^v_{\text{ACC-DAT}}$ is calculated as follows: $R^v_{\text{ACC-DAT}} = \frac{N^v_{\text{ACC-DAT}}}{N^v_{\text{ACC-DAT}} + N^v_{\text{DAT-ACC}}}$, where $N^v_{\text{ACC-DAT}}$ / $N^v_{\text{DAT-ACC}}$ is the number of examples with the ACC-DAT / DAT-ACC order in $\hat{S}^v$.", "Figure 3-(a) shows the relationship between $R^v_{\text{ACC-DAT}}$ determined by the LMs and the one reported in a previous count-based study (Sasano and Okumura, 2016).", "These results strongly correlate, with Pearson correlation coefficients of 0.91 and 0.88 for the CLM and SLM, respectively.", "In addition, the claim that the canonical word order is DAT-ACC (Hoji, 1985) is unlikely to be valid, because there are verbs for which $R^v_{\text{ACC-DAT}}$ is very high (details in Appendix B.1).", "This conclusion is consistent with Sasano and Okumura (2016).", "Word order and verb types In Japanese, there are show-type and pass-type verbs (details in Appendix B.2).", "Matsuoka (2003) claimed that the order of double objects differs depending on these verb types.", "Following Sasano and Okumura (2016), we analyzed this trend.", "We applied the Wilcoxon rank-sum test between the distributions of $R^v_{\text{ACC-DAT}}$ determined by the LMs in the two groups (show-type and pass-type verbs).", "The results show no significant difference between the two groups (the p-value is 0.17 and 0.12 in the experiments using the CLM and SLM, respectively).", "These results are consistent with the count-based (Sasano and Okumura, 2016) and the human-based (Miyamoto, 2002; Koizumi and Tamaoka, 2004) methods.", "Word order and argument omission Sasano and Okumura (2016) claimed that the frequently omitted case is placed near the verb.", "First, we calculated $R^v_{\text{DAT-only}}$ for each verb $v$ as follows: $R^v_{\text{DAT-only}} = \frac{N^v_{\text{DAT-only}}}{N^v_{\text{DAT-only}} + N^v_{\text{ACC-only}}}$, where $N^v_{\text{DAT-only}}$ / $N^v_{\text{ACC-only}}$ denotes the number of examples in which the DAT / ACC case appears and the other case does not in $\hat{S}^v$.", "A large $R^v_{\text{DAT-only}}$ score indicates that the DAT argument is less frequently omitted than the ACC argument in $\hat{S}^v$.", "We analyzed the relationship between $R^v_{\text{DAT-only}}$ and $R^v_{\text{ACC-DAT}}$ for each verb.", "Figure 3-(b) shows that the regression lines from the LM-based method and from Sasano and Okumura (2016) corroborate similar trends.", "The Pearson correlation coefficient between $R^v_{\text{DAT-only}}$ and $R^v_{\text{ACC-DAT}}$ is 0.404 for the CLM and 0.374 for the SLM.", "The results are consistent with Sasano and Okumura (2016), where they reported that the correlation coefficient was 0.391.",
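The per-verb rates above ($R^v_{\text{ACC-DAT}}$ and, analogously, $R^v_{\text{DAT-only}}$) are simple ratios over the LM-preferred examples. A small sketch with invented example data:

```python
from collections import Counter

# Sketch: given which order the LM preferred for each example of a verb,
# compute R_ACC-DAT = N_ACC-DAT / (N_ACC-DAT + N_DAT-ACC).
# The verbs and preference lists below are invented for illustration.

def acc_dat_rate(preferences):
    """preferences: iterable of 'ACC-DAT' / 'DAT-ACC' labels for one verb."""
    counts = Counter(preferences)
    n_acc_dat, n_dat_acc = counts["ACC-DAT"], counts["DAT-ACC"]
    return n_acc_dat / (n_acc_dat + n_dat_acc)

lm_preferences_by_verb = {
    "ataeru (give)": ["DAT-ACC", "DAT-ACC", "ACC-DAT"],
    "okuru (send)": ["ACC-DAT", "ACC-DAT", "DAT-ACC", "ACC-DAT"],
}
for verb, prefs in lm_preferences_by_verb.items():
    print(verb, round(acc_dat_rate(prefs), 3))
```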
"as the DAT argument, while Type-B has an animate processor", "( teacher ).", "It was reported that Type-A is likely to be the ACC-DAT order, while Type-B is likely to be the DAT-ACC order.", "Following Sasano and Okumura", "(2016), we analyzed 113 verbs.", "15 For each verb, we compared the ACC-DAT rate in its type-A examples and the rate in its type-B examples.", "The number of verbs where the ACC-DAT order is preferred in Type-A examples to Type-B examples is significantly larger", "(a two-sided sign test p < 0.05).", "This result is consistent with that of Sasano and Okumura", "(2016); Matsuoka", "(2003)", "and implies that the LMs capture the animacy of the nouns.", "Details are in Appendix B.3.", "Word order and co-occurrence of verb and arguments Sasano and Okumura", "(2016)", "claimed that an argument that frequently co-occurs with the verb tends to be placed near the verb.", "For each example, the LMs determine which word order", "( DAT-ACC or ACC-DAT )", "is appropriate.", "Each example also has a score NPMI", "(definition in Appendix B.4).", "Higher NPMI means that the DAT noun in the example more strongly co-occurs with the verb in the example than the ACC noun.", "Figure", "3-(c)", "shows the relationship between NPMI and the ACC-DAT rate in each example.", "NPMI and the ACC-DAT rate are correlated with the Pearson correlation coefficient of 0.517 and 0.521 in CLM and SLM, respectively.", "These results are consistent with Sasano and Okumura", "(2016).", "15 Among the 126 verbs used in Sasano and Okumura", "(2016), 113 verbs with data that do not overlap with the LM training data were selected.", "Our focus moves to the cases closer to the beginning of the sentences.", "The following claim is a well-known property of Japanese word order: The case representing time information", "( TIM )", "is placed before the case representing location information", "( LOC ), and the TIM and LOC cases are placed before the NOM case", "(Saeki, 1960, 1998).", "We examined a parallel between the result obtained with the LM-based and count-based methods on this claim.", "We randomly collected 81k examples from 3B web pages.", "16 To create the examples, we identified the case components by KNP, and the TIM and LOC cases were categorized with JUMAN", "(details in Appendix C).", "For each example s , we created all possible word orders and obtained the word order with the highest generation probability", "( s ).", "Given S a set of s , we calculated a score o", "( a < b )", "for cases a and b as follows: o", "where N k<l is the number of examples where the case k precedes the case l in S .", "Higher o", "( a < b )", "indicates that the case a is more likely to be placed before the case b .", "The results with the LM-based methods and the count-based method are consistent", "(Table 2).", "Both results show that o", "( TIM < LOC )", "is significantly larger than o", "( TIM > LOC )", "( p < 0 . 05 with a two-sided signed test), which indicates that the TIM case usually precedes the LOC case.", "Similarly, the results indicate that the TIM case and the LOC case precedes the NOM case.", "We checked the preference of the adverb position in LMs.", "The position of the adverb has no restriction except that it must be before the verb, which is similar to the trend of the case position.", "However, Koizumi and Tamaoka", "(2006)", "claimed that There is a canonical position of an adverb depend-16 Without overlap with the training data of LMs. 
"We checked the preference of adverb position in LMs.", "The position of an adverb has no restriction except that it must be placed before the verb, which is similar to the trend of the case positions.", "However, Koizumi and Tamaoka (2006) claimed that there is a canonical position of an adverb depending on its type.", "They considered four types of adverbs: MODAL , TIME , MANNER , and RESULTIVE .", "We used the same examples as Koizumi and Tamaoka (2006).", "For each example s, we created its three variants with a different adverb position as follows ( A friend handled the tools roughly. ):", "(10) ASOV: roughly friend-NOM tools-ACC handled. SAOV: friend-NOM roughly tools-ACC handled. SOAV: friend-NOM tools-ACC roughly handled.", "Here, a sequence of letters such as ASOV denotes the word order of the corresponding sentence.", "For example, ASOV indicates the order: adverb < subject < object < verb.", "A, S, O, and V denote adverb, subject, object, and verb, respectively.", "Then, we obtained the preferred adverb position by comparing the generation probabilities of the variants.", "Finally, for each adverb type and its examples, we ranked the preference of the possible adverb positions: ASOV, SAOV, and SOAV.", "Table 3 shows the rank correlation of the preference for the position of each adverb type.", "The results show that the trends of the LMs are similar to those of the human-based method (Koizumi and Tamaoka, 2006).", "The effect of long-before-short, the trend that a long constituent precedes a short one, has been reported in several studies (Asahara et al., 2018; Orita, 2017).", "We checked whether this effect can be captured with the LM-based method.", "Among the examples used in Section 5.2, we analyzed about 9.5k examples in which the position of the constituent with the largest number of chunks (chunks were identified by KNP) differed between the canonical case order and the order supported by the LMs.", "Table 4 shows that there is a significantly (p < 0.05 with a two-sided sign test) larger number of examples in which the longest constituent moves closer to the beginning of the sentence.", "Table 4 (changes in the position of the constituent with the largest number of chunks): long precedes short: CLM 5,640, SLM 5,757; short precedes long: CLM 3,754, SLM 3,914.", "This result is consistent with existing studies and supports the tendency for longer constituents to appear before shorter ones.", "We found parallels between the results obtained with the LM-based method and those obtained with previously established methods on various properties of canonical word order.", "These results support the use of LMs for analyzing Japanese canonical word order.",
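As a concrete instance of the per-example comparisons summarized above, the adverb-position analysis of Section 5.3 ranks the three position variants of each example by LM score. A sketch; `score` is a placeholder for the bidirectional LM scoring sketched earlier:

```python
# Sketch: rank the ASOV / SAOV / SOAV variants of one example by a
# sentence-scoring function (higher = more preferred by the LM).

def rank_adverb_positions(subject, obj, adverb, verb, score):
    variants = {
        "ASOV": [adverb, subject, obj, verb],
        "SAOV": [subject, adverb, obj, verb],
        "SOAV": [subject, obj, adverb, verb],
    }
    return sorted(variants, key=lambda k: score(variants[k]), reverse=True)

toy_score = len  # placeholder: any callable mapping a token list to a number
print(rank_adverb_positions("friend-NOM", "tools-ACC", "roughly", "handled", toy_score))
```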
"In the previous section, we tentatively concluded that LMs can be used for analyzing the intra-sentential properties of canonical word order.", "Based on this finding, in this section we demonstrate the analysis of additional claims on the properties of canonical word order with the LM-based method, claims which have been less explored by large-scale experiments.", "This section shows the analysis of the relationship between topicalization and canonical word order.", "Additional analyses on the effect of various adverbial particles on word order are shown in Appendix F.", "6.1 Topicalization in Japanese The adverbial particle wa ( TOP ) is usually used as a postpositional particle when a specific constituent represents the topic of the sentence (Heycock, 1993; Noda, 1996; Fry, 2003).", "When a case component is topicalized, the constituent moves to the beginning of the sentence, and the particle wa ( TOP ) is added (Noda, 1996).", "Additionally, the original case particle is sometimes omitted, which makes the case of the constituent difficult to identify (in Example (8), the particles o ( ACC ) and ga ( NOM ) are omitted).", "For example, to topicalize book-ACC in Example (8)-a, the constituent moves to the beginning of the sentence, and the original accusative case particle ( ACC ) is omitted.", "Similarly, teacher-NOM is topicalized in Example (8)-b.", "The original sentence is enclosed in square brackets in Example (8).", "With the above process, we can easily create a sentence with a topicalized constituent.", "On the other hand, identifying the original case of a topicalized case component is error-prone.", "Thus, the LM-based method is suitable for empirically evaluating claims related to topicalization.", "By using the LM-based method, we evaluate the following two claims: (i) The more anterior the case is in the canonical word order, the more likely its component is to be topicalized (Noda, 1996); (ii) The more a verb prefers the ACC-DAT order, the more likely the ACC case is to be topicalized than the DAT case.", "Claim (i) suggests that, for example, the NOM case is more likely to be topicalized than the ACC case, because the NOM case is before the ACC case in the canonical word order of Japanese.", "Claim (ii) is based on our own observation.", "It can be regarded as an extension of claim (i) considering the effect of the verb on its argument order.", "We assume that the canonical word order of Japanese is TIM < LOC < NOM < DAT < ACC in this section.", "Claim (i) We examine which case is more likely to be topicalized.", "We collected 81k examples from Japanese Wikipedia (details are in Appendix C).", "For each example, a set of candidates was created by topicalizing each case, as shown in Example (8).", "Then, we selected the sentence with the highest LM score in each candidate set.", "We denote the obtained sentences as $S_{topic}$.", "We calculated a score $t_{a|b}$ for pairs of cases $a$ and $b$ as $t_{a|b} = \frac{N_{a|b}}{N_{a|b} + N_{b|a}}$, where $N_{a|b}$ is the number of examples in which cases $a$ and $b$ both appear and case $a$ is the topic of the sentence in $S_{topic}$.", "The higher the score is, the more likely case $a$ is to be topicalized than case $b$.", "We compared $t_{a|b}$ and $t_{b|a}$ among the pairs of cases $a$ and $b$ where case $a$ precedes case $b$ in the canonical word order.", "Through our experiments, $t_{a|b}$ was significantly larger than $t_{b|a}$ ($p < 0.05$ with a paired t-test) in the CLM and SLM results, which supports claim (i) (Noda, 1996).", "Detailed results are shown in Appendix E.",
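A sketch of the claim (i) computation, using the normalization assumed in the reconstructed definition of $t_{a|b}$ above; the example records are invented:

```python
# Sketch: among LM-preferred topicalized sentences in which cases a and b
# both appear, count how often each one is the topic:
# t_{a|b} = N_{a|b} / (N_{a|b} + N_{b|a}).

def topicalization_score(selected, a, b):
    n_a = sum(1 for s in selected
              if a in s["cases"] and b in s["cases"] and s["topic"] == a)
    n_b = sum(1 for s in selected
              if a in s["cases"] and b in s["cases"] and s["topic"] == b)
    return n_a / (n_a + n_b)

s_topic = [
    {"cases": {"NOM", "ACC"}, "topic": "NOM"},
    {"cases": {"NOM", "ACC"}, "topic": "NOM"},
    {"cases": {"NOM", "ACC"}, "topic": "ACC"},
]
print(topicalization_score(s_topic, "NOM", "ACC"))  # 2/3
```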
Claim", "(ii)", "The canonical word order of double objects is different for each verb", "(Section 5.1).", "Based on this assumption and the claim", "(i), we hypothesized that the more the verb prefers the ACC-DAT order, the more likely the ACC case of the verb is topicalized than the DAT case.", "We used the same data as in Section 5.1.", "For each example, we created two sentences by topicalizing the ACC or DAT argument.", "Then we compared their generation probabilities.", "In each set of examples corresponding to a verb v , we calculated the rate that the sentence with the topicalized ACC argument is preferred rather than that with the topicalized DAT argument.", "This rate and R v ACC-DAT is significantly correlated with the Pearson correlation coefficient of 0.89 and 0.84 in CLM and SLM, respectively.", "This results support the claim", "(ii).", "Detailed results are shown in Appendix E. 7 Conclusion and Future work We have proposed to use LMs as a tool for analyzing word order in Japanese.", "Our experimental results support the validity of using Japanese LMs for canonical word order analysis, which has the potential to broaden the possibilities of linguistic research.", "From an engineering view, this study supports the use of LMs for scoring Japanese word order automatically.", "From the viewpoint of the linguistic field, we provide additional empirical evidence to various word order hypotheses as well as demonstrate the validity of the LM-based method.", "We plan to further explore the capability of LMs on other linguistic phenomena related to word order, such as given new ordering", "(Nak-agawa, 2016; Asahara et al., 2018).", "Since LMs are language-agnostic, analyzing word order in another language with the LM-based method would also be an interesting direction to investigate.", "Furthermore, we would like to extend a comparison between machine and human language processing beyond the perspective of word order.", "We would like to offer our gratitude to Kaori Uchiyama for taking the time to discuss our paper and Ana Brassard for her sharp feedback on English.", "We also would like to show our appreciation to the Tohoku NLP lab members for their valuable advice.", "We are particularly grateful to Ryohei Sasano for sharing the data for double objects order analyses.", "This work was supported by JSTCREST Grant Number JPMJCR1513, JSPS KAK-ENHI Grant Number JP19H04162, and Grant-in-Aid for JSPS Fellows Grant Number JP20J22697." ]
[ "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "abstain", "method", "other", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "other", "other", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other" ]
[ "In dialogue state tracking, dialogue history is a crucial material, and its utilization varies between different models.", "However, no matter how the dialogue history is used, each existing model uses its own consistent dialogue history during the entire state tracking process, regardless of which slot is updated.", "Apparently, it requires different dialogue history to update different slots in different turns.", "Therefore, using consistent dialogue contents may lead to insufficient or redundant information for different slots, which affects the overall performance.", "To address this problem, we devise DiCoS-DST to dynamically select the relevant dialogue contents corresponding to each slot for state updating.", "Specifically, it first retrieves turn-level utterances of dialogue history and evaluates their relevance to the slot from a combination of three perspectives: (1) its explicit connection to the slot name; (2) its relevance to the current turn dialogue; (3) Implicit Mention Oriented Reasoning.", "Then these perspectives are combined to yield a decision, and only the selected dialogue contents are fed into State Generator, which explicitly minimizes the distracting information passed to the downstream state prediction.", "Experimental results show that our approach achieves new state-of-the-art performance on MultiWOZ 2.1 and MultiWOZ 2.2, and achieves superior performance on multiple mainstream benchmark datasets (includ-ing Sim-M, Sim-R, and DSTC2).", "1 1 Introduction Task-oriented dialogue systems have recently attracted growing attention and achieved substantial progress.", "Dialogue state tracking (DST) is a core component, where it is responsible for interpreting user goals and intents and feeding Corresponding author.", "1 Code is available at https://github.com/guojinyu88/DiCoS-master S1 :Good morning!", "How can I help you?", "U1 :I'm looking to stay at a guesthouse while I'm in town.", "S2 :The alpha-milton guest house is in the north and moderately priced.", "Would you like to book a stay?", "U2 :I need something cheaply priced.", "I don't need internet access, so don't worry about that.", "S3 :We have 9 guesthouses that match your search.", "Would you like to narrow it down?", "S4 :May i suggest the Worth House?", "It is a cheap, 4 star hotel in northern Cambridge.", "U4 : How long can I book it from Monday for 7 people?", "U3 :I don't care as long as it's a guesthouse located in the north for cheap.", "S5 : You can book it for 3 nights.", "S6 :Your booking for 3 nights was a success!", "Is there anything else I can help you with?", "U6 :Thank you.", "Are there any cheap restaurants near the hotel as well?", "U5 : That's fine.", "Please book it for me.", "S7 :Just to clarify, are you looking for a cheap restaurant in the north area of town?", "S8 :I'm sorry, there are no Swiss restaurants in the north side of town.", "Is there a different food choice you would like to try?", "U8 :I see.", "Hmm.", "What about Indian?", "U7 :Yes.", "This restaurant should serve swiss food too.", "S9 :How about the Royal Spice, it's a cheap Indian place in the north part of town.", "S10 :No problem, address is Victoria Avenue Chesterton, postcode cb41eh.", "U10 :Thank you.", "I would also like to book a taxi to get from the guesthouse to the restaurant.", "U9 :Thank you, please provide the address and the postcode.", "hotel-type: ['guesthouse'] hotel-pricerange: ['cheap'] hotel-type: ['guesthouse'] hotel-name: ['worth house']hotel-bookday: ['monday'] hotel-bookpeople: ['7'] 
"The common practice treats it as a problem of compacting the dialogue content into a series of slot-value pairs that represent information about the user goals updated until the current turn.", "For example, in Figure 1, the dialogue state at turn 2 is {( hotel type , guesthouse ), ( hotel pricerange , cheap )}.", "In dialogue state tracking, dialogue history is a crucial source material.", "Recently, granularity has been proposed to quantify the utilization of dialogue history (Yang et al., 2021).", "In DST, the definition of granularity is the number of dialogue turns spanning from a certain dialogue state in the dialogue to the current dialogue state.", "Traditional DST models usually determine dialogue states by considering only utterances at the current turn (i.e., granularity = 1 ), while recent research attempts to utilize partial history (i.e., granularity = k, k < T ) or to introduce all dialogue history information into the prediction (i.e., granularity = T ).", "However, no matter what granularity is used, we find that each model uses a constant granularity it determines, regardless of which slot is being updated.", "Apparently, different slots in different turns require different granularity.", "For example, in Figure 1, the granularity required for slots hotel name , hotel bookday , and hotel bookpeople in turn 5 is 2, while slot hotel bookstay in turn 5 requires a granularity of 1.", "Therefore, using a constant granularity may lead to insufficient input for updating some slots, while for others, redundant and confusing contents can become distracting information and pose a hindrance, which affects the overall performance.", "Further, granularity means directly working on all dialogue contents from a particular turn to the current turn, regardless of the fact that there are still dialogue contents that are not relevant to the slot.", "Therefore, if it is possible to break the limitation of granularity and to dynamically select the relevant dialogue contents corresponding to each slot, the selected dialogue contents as input will explicitly minimize the distracting information being passed to the downstream state prediction.", "To achieve this goal, we propose DiCoS-DST to fully exploit the utterances and elaborately select the relevant dialogue contents corresponding to each slot for state updating.", "Specifically, we retrieve turn-level utterances of dialogue history and evaluate their relevance to the slot from a combination of three perspectives.", "First, we devise an SN-DH module to touch on the relation between the dialogue and the slot name, which straightforwardly reflects the relevance.", "Second, we propose a CT-DH module to explore the dependency between each turn in the dialogue history and the current turn dialogue.", "The intuition behind this design is that the current turn dialogue is crucial.", "If any previous turn is strongly related to the current turn dialogue, it can be considered useful as dependency information for slot updating.", "Third, we propose an Implicit Mention Oriented Reasoning module to tackle the implicit mention (i.e., coreference) problem that commonly exists in complex dialogues.", "Specifically, we build a novel graph neural network (GNN) to explicitly facilitate reasoning over the turns of dialogue and all slot-value pairs for better exploitation of the coreferential relation information.",
"After the evaluation of these three modules, we leverage a gate mechanism to combine these perspectives and yield a decision.", "Finally, the selected dialogue contents are fed into the State Generator to enhance their interaction, form a new contextualized sequence representation, and generate a value using a hybrid method.", "We evaluate the effectiveness of our model on most mainstream benchmark datasets on task-oriented dialogue.", "Experimental results show that our proposed DiCoS-DST achieves new state-of-the-art performance on both versions of the most actively studied dataset: MultiWOZ 2.1 (Eric et al., 2019) and MultiWOZ 2.2 (Zang et al., 2020), with joint goal accuracy of 61.02% and 61.13%.", "In particular, the joint goal accuracy on MultiWOZ 2.2 outperforms the previous state-of-the-art by 3.09%.", "In addition, DiCoS-DST also achieves new state-of-the-art performance on Sim-M and Sim-R (Shah et al., 2018) and competitive performance on DSTC2 (Henderson et al., 2014).", "Our contributions in this work are threefold: We propose a Multi-Perspective Dialogue Collaborative Selector module to dynamically select relevant dialogue contents corresponding to each slot from a combination of three perspectives.", "This module can explicitly filter the distracting information being passed to the downstream state prediction.", "We propose Implicit Mention Oriented Reasoning and implement it by building a GNN to explicitly facilitate reasoning and exploit the coreferential relation information in complex dialogues.", "Our DiCoS-DST model achieves new state-of-the-art performance on the MultiWOZ 2.1, MultiWOZ 2.2, Sim-M, and Sim-R datasets.", "There has been a plethora of research on dialogue state tracking.", "Traditional dialogue state trackers relied on a separate Spoken Language Understanding (SLU) module (Thomson and Young, 2010; Wang and Lemon, 2013) to extract relevant information.", "In recent years, neural network models have been proposed for further improvements.", "One way to classify DST models is whether they use dialogue history.", "[Figure 2: The architecture of DiCoS-DST: a pre-trained language model encodes $[\mathrm{CLS}]_t \oplus B_{T-1} \oplus [\mathrm{SEP}] \oplus D_t \oplus [\mathrm{SEP}]$ for each turn; the State Update Predictor selects the slots $S^j \in U_s$ to update; and the Multi-Perspective Dialogue Collaborative Selector (SN-DH, CT-DH, Implicit Mention Oriented Reasoning) combines $h_{SN\text{-}DH}$, $h_{CT\text{-}DH}$, and $h_{IMOR}$ into $h_{SUM}$ for each turn and selects the set of top-$k$ dialogue turns.]",
...", "history.", "Some DST models obtain each slot value in the dialogue state by inquiring about a part or all of the dialogue history (Xu and Hu, 2018; Lei et al., 2018; Goel et al., 2019; Ren et al., 2019; Shan et al., 2020; Zhang et al., 2020; Chen et al., 2020; Guo et al., 2021), while the others use the current turn dialogue to predict the dialogue state (Mrkic et al., 2017; Kim et al., 2020; Heck et al., 2020; Zhu et al., 2020).", "Recently, (Yang et al., 2021) first proposed the granularity in DST to quantify the use of dialogue history.", "Its experimental results show that different models on different datasets have different optimal granularity (not always using the entire dialogue history).", "However, no matter what granularity is used, each model uses a constant granularity it determines, regardless of which slot is updated.", "On the other hand, dialogue state tracking and machine reading comprehension (MRC) have similarities in many aspects (Gao et al., 2020).", "Recently, Multi-hop Reading Comprehension (MHRC) has been a challenging topic.", "For cases in MHRC datasets, one question is usually provided with several lexically related paragraphs, which contain many confusing contexts.", "To deal with this situation, cascaded models (Qiu et al., 2019; Groeneveld et al., 2020; Tu et al., 2020; Wu et al., 2021) that are composed of a reader and a retriever are often used.", "They retrieve the most relevant evidence paragraphs first and perform multi-hop reasoning on retrieved contexts thereafter.", "The mechanism of dialogue selection before state generation in our work is partially inspired by the paragraph retrieval in multi-hop reading comprehension.", "The architecture of DiCoS-DST is illustrated in Figure", "2. DiCoS-DST consists of Encoder, State Update Predictor, Multi-Perspective Dialogue Collaborative Selector, and State Generator.", "Here we first define the problem setting in our work.", "We define the number of the current turn as T .", "The task is to predict the dialogue state at each turn t ( t T ) , which is defined as B t = { ( S j , V jt ) | 1 j J } , where S j is the slot name, V jt is the corresponding slot value, and J is the total number of slots.", "For the sake of simplicity, we omit the superscript T in the variables in the next sections.", "We employ the representation of the previous turn dialogue state BT 1 concatenated to the representation of each turn dialogue utterances D t as input: E t = [CLS] t BT 1 [SEP] D t , (1 t T ) , where [CLS] t is a special token added in front of every turn input.", "The representation of the previous turn dialogue state is BT 1 = B 1 T 1 . . . 
BJ T 1 .", "The representation of each slot's state B jT 1 = [SLOT] jT 1 S j [VALUE] jT 1 V jT 1 , where [SLOT] jT 1 and [VALUE] jT 1 are special tokens that represent the slot name and the slot value 2322 at turn T 1 , respectively.", "We donate the representation of the dialogue at turn t as D t = R t ; U t [SEP] , where R t is the system response and U t is the user utterance.", "; is a special token used to mark the boundary between R t and U t , and [SEP] is a special token used to mark the end of a dialogue turn.", "Then a pre-trained language model (PrLM) will be adopted to obtain contextualized representation for the concatenated input sequence E t .", "We attach a two-way classification module to the top of the Encoder output.", "It predicts which slots require to be updated in the current turn.", "The subsequent modules will only process the selected slots, while the other slots will directly inherit the slot values from the previous turn.", "We inject this module because whether a slot requires to be updated indicates whether the current turn dialogue is significant for this slot.", "For CT-DH of the subsequent Multi-Perspective Collaborative Selector, the great importance of the current turn dialogue is a prerequisite.", "A more detailed explanation will be given in Section 3.3.", "We employ the same mechanism as (Guo et al., 2021) to train the module and to predict the state operation.", "We sketch the prediction process as follows: SUP( S j ) = (cid:26) update , if Total _ score j > inherit , otherwise (1) We define the set of the selected slot indices as U s = { j | SUP( S j ) = update } .", "For each slot S j ( j U s ) selected to be updated, SN-DH, CT-DH, and Implicit Mention Oriented Reasoning modules are proposed to evaluate dialogue relevance and aggregate representations from three perspectives.", "Then a gated fusion mechanism is implemented to perform the dialogue selection.", "SN-DH SN-DH (Slot Name Dialogue History) aims to explore the correlation between slot names and each turn of the dialogue history.", "For slot S j , the slot name is straightforward explicit information.", "Therefore, the correlation with the slot name directly reflects the importance of the dialogue turn.", "We take the slot name presentation [SLOT] jT 1 as the attention to the t -th turn dialogue representation D t .", "The output jt = softmax( D t ([SLOT] jT 1 ) ) represents the correlation between each position of D t and the j -th slot name at turn t .", "Then we get the aggregated dialogue representation h t SN DH = ( jt ) D t , which will participate in the subsequent fusion as the embedding of the t -th turn dialogue in this perspective.", "CT-DH As aforementioned, a slot that needs to be updated in the current turn means that the current turn dialogue is most relevant to this slot.", "In this case, if the dialogue content of any other turn contains the information that the current turn dialogue highly depends on, it can also be considered useful.", "Based on this consideration, we devise a CT-DH (Current Turn Dialogue History) module to explore this association.", "Specifically, we build a multi-head self-attention (MHSA) layer on top of the [CLS] tokens generated from different turns of dialogue to enhance inter-turn interaction.", "The MHSA layer is defined as: head i = Attention( QW Qi , KW Ki , V W Vi ) (2) Multihead = ( head i . . . head n ) WO (3) I = MHSA([CLS] 1 . . . 
"CT-DH As aforementioned, a slot that needs to be updated in the current turn means that the current turn dialogue is most relevant to this slot.", "In this case, if the dialogue content of any other turn contains information that the current turn dialogue highly depends on, it can also be considered useful.", "Based on this consideration, we devise a CT-DH (Current Turn Dialogue History) module to explore this association.", "Specifically, we build a multi-head self-attention (MHSA) layer on top of the [CLS] tokens generated from different turns of dialogue to enhance inter-turn interaction.", "The MHSA layer is defined as: $\mathrm{head}_i = \mathrm{Attention}(QW^Q_i, KW^K_i, VW^V_i)$ (2), $\mathrm{MultiHead} = (\mathrm{head}_1 \oplus \cdots \oplus \mathrm{head}_n)W^O$ (3), $I = \mathrm{MHSA}([\mathrm{CLS}]_1 \oplus \cdots \oplus [\mathrm{CLS}]_T)$ (4), where $Q$, $K$, and $V$ are linear projections from the [CLS] embeddings of each turn of dialogue, representing attention queries, keys, and values.", "We then append an attention layer between the output representation of the current turn dialogue and each turn of dialogue history to capture interactions between them: $\beta_t = \mathrm{Attention}([\mathrm{CLS}]_t, [\mathrm{CLS}]_T)$ (5), $h^t_{CT\text{-}DH} = \beta_t [\mathrm{CLS}]_T + [\mathrm{CLS}]_t$ (6).", "$h^t_{CT\text{-}DH}$ will participate in the subsequent fusion as an aggregated representation of the $t$-th dialogue in this perspective.", "Implicit Mention Oriented Reasoning Handling a complex dialogue usually requires addressing implicit mentions (i.e., coreferences).", "As shown in Figure 1, in turn 10, the restaurant is not referred to explicitly upon ordering a taxi within the same dialogue turn.", "Instead, it is present in the value of another slot.", "Therefore, SN-DH and CT-DH have difficulty dealing with this case due to their mechanisms.", "To tackle this problem, we build a graph neural network (GNN) model to explicitly facilitate reasoning over the turns of dialogue and all slot-value pairs for better exploitation of the coreferential relation.", "As illustrated in Figure 3, the nodes in the graph include two types: $N_D$ for each turn of dialogue and $N_{S\text{-}V}$ for each slot-value pair.", "They are initialized with the MHSA output representation $[\mathrm{CLS}]_t$ and $W_{S\text{-}V}([\mathrm{SLOT}]^z_{T-1} \oplus [\mathrm{VALUE}]^z_{T-1})$ $(1 \le z \le J)$, respectively.", "Then we design four types of edges to build the connections among graph nodes: 1) Add an edge between $N^j_{S\text{-}V}$ and $N^T_D$ (red line in Figure 3).", "As aforementioned, the slot $S^j$ will be updated.", "This edge is to establish the connection between the slot to be updated and the current turn dialogue; 2) Add an edge between $N^j_{S\text{-}V}$ and $N^z_{S\text{-}V}$ $(z \ne j)$ (blue lines in Figure 3).", "These edges are to establish connections between the slot to be updated and the other slots; 3) Add an edge between $N^z_{S\text{-}V}$ $(z \ne j)$ and $N^{t_z}_D$.", "$t_z$ is the turn when the most up-to-date value of $S^z$ was updated (green lines in Figure 3).", "These edges are to establish connections between each slot and the turn of dialogue in which its latest slot value was updated; 4) Add an edge between $N^{z_1}_{S\text{-}V}$ and $N^{z_2}_{S\text{-}V}$ ($S^{z_1}$ and $S^{z_2}$ belong to the same domain) (yellow lines in Figure 3).", "These edges are to establish connections between slots that belong to the same domain.", "The motivation for this design is that we first explore the relation between the slot to be updated and the other slot-value pairs based on the current turn dialogue.", "Then we use the other slot-value pairs as media to establish relations to their corresponding dialogue turns.", "We add the fourth type of edges to represent the auxiliary relationship of slots that belong to the same domain.", "We use a multi-relational GCN with a gating mechanism, as in De Cao et al. (2019) and Tu et al. (2019).", "We define $h^0_i$ as the initial node embedding from $N_D$ or $N_{S\text{-}V}$.", "The calculation of node embeddings after one hop can be formulated as: $h^{l+1}_i = \phi(u^l_i) \odot g^l_i + h^l_i \odot (1 - g^l_i)$ (7), $u^l_i = f_s(h^l_i) + \sum_{r \in R} \frac{1}{|\mathcal{N}^r_i|} \sum_{n \in \mathcal{N}^r_i} f_r(h^l_n)$ (8), $g^l_i = \mathrm{sigmoid}(f_g([u^l_i; h^l_i]))$ (9), where $\mathcal{N}^r_i$ is the set of neighbors of node $i$ with edge type $r$, $R$ is the set of all edge types, and $h^l_n$ is the node representation of node $n$ in layer $l$.", "$|\cdot|$ indicates the size of the neighboring set.", "Each of $f_r$, $f_s$, $f_g$ can be implemented with an MLP.", "The function $\phi$ denotes a non-linear activation function.", "The gate control $g^l_i$ is a vector consisting of values between 0 and 1 to control the amount of information taken from the computed update $u^l_i$ or from the original $h^l_i$.", "[Figure 3: Diagram of the graph neural network, with dialogue nodes $N_D$ and slot-value pair nodes $N_{S\text{-}V}$, message passing over $L$ hops, and $h^t_{IMOR}$ read off the $t$-th dialogue node.]",
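A compact sketch of the gated multi-relational update in Eqs. (7)-(9), under the reconstruction above; $f_s$, $f_r$, and $f_g$ are reduced to single linear layers and tanh plays the role of the activation $\phi$ (whose exact choice was lost in extraction), so this is an illustration rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class GatedRGCNLayer(nn.Module):
    """One hop of a multi-relational GCN with a gating mechanism."""

    def __init__(self, d, num_relations):
        super().__init__()
        self.f_s = nn.Linear(d, d)
        self.f_r = nn.ModuleList(nn.Linear(d, d) for _ in range(num_relations))
        self.f_g = nn.Linear(2 * d, d)

    def forward(self, h, neighbors):
        # neighbors[r] maps node index -> list of neighbor indices under relation r.
        agg = torch.zeros_like(h)
        for r, f_r in enumerate(self.f_r):
            for i, nbrs in neighbors[r].items():
                if nbrs:
                    # mean over neighbors implements the 1/|N_i^r| normalization
                    agg[i] = agg[i] + f_r(h[nbrs]).mean(dim=0)
        u = self.f_s(h) + agg                                    # Eq. (8)
        g = torch.sigmoid(self.f_g(torch.cat([u, h], dim=-1)))   # Eq. (9)
        return torch.tanh(u) * g + h * (1 - g)                   # Eq. (7)

layer = GatedRGCNLayer(d=8, num_relations=4)
h = torch.randn(3, 8)                          # 3 nodes with dimension 8
neighbors = [{0: [1, 2]}, {}, {}, {1: [0]}]    # sparse per-relation adjacency
print(layer(h, neighbors).shape)               # torch.Size([3, 8])
```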
"After the message passes on the graph with $L$ hops, we take the final representation of the $t$-th turn dialogue node $N^t_D$ as the aggregated representation $h^t_{IMOR}$ in this perspective.", "Gating Fusion and Collaborative Selection The representations $h^t_{SN\text{-}DH}$, $h^t_{CT\text{-}DH}$, and $h^t_{IMOR}$ of the $t$-th turn dialogue enter this module for fusion and ranking.", "To balance the information from multiple perspectives, we leverage a gate mechanism to compute a weight that decides how much information from each perspective should be combined.", "It is defined as follows: $\gamma_1 = \sigma(W'_1 \tanh(W_1 h^t_{SN\text{-}DH}))$ (10); $\gamma_2 = \sigma(W'_2 \tanh(W_2 h^t_{CT\text{-}DH}))$ (11); $\gamma_3 = \sigma(W'_3 \tanh(W_3 h^t_{IMOR}))$ (12); $h^t_{sum} = \gamma_1 h^t_{SN\text{-}DH} + \gamma_2 h^t_{CT\text{-}DH} + \gamma_3 h^t_{IMOR}$ (13).", "After the fusion, an MLP layer follows, and then we take the dialogues of the top $k$ ranked turns as the selected dialogue contents.", "It is worth mentioning that, unlike the state update predictor, since there is no ground-truth label of the dialogue turns that should be selected corresponding to each slot, we take this module and the following state generator as a whole and train it under the supervision of the final dialogue state label.", "We mark each selected dialogue turn to make the gradient of the state generator losses backpropagate only to the marked turns to ensure the effectiveness of supervision.", "The selected dialogue content will be utilized to jointly update the dialogue state.",
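The gate-and-rank computation in Eqs. (10)-(13) can be sketched as follows; the ranking MLP is reduced to a single linear scorer, and all dimensions and names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GatedPerspectiveFusion(nn.Module):
    """Gating fusion of the three per-turn representations and turn ranking."""

    def __init__(self, hidden: int):
        super().__init__()
        self.inner = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(3)])  # W_i
        self.outer = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(3)])       # W'_i
        self.scorer = nn.Linear(hidden, 1)   # stands in for the ranking MLP

    def forward(self, h_sn, h_ct, h_imor, k: int = 2):
        # each input: (num_turns, hidden), one row per dialogue turn
        fused = 0
        for w, w_out, h in zip(self.inner, self.outer, (h_sn, h_ct, h_imor)):
            gamma = torch.sigmoid(w_out(torch.tanh(w(h))))   # Eqs. 10-12, per turn
            fused = fused + gamma * h                        # Eq. 13
        scores = self.scorer(fused).squeeze(-1)              # rank the turns
        top_k = torch.topk(scores, k=min(k, scores.numel())).indices
        return fused, top_k                                  # indices of selected turns
```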
"Cascaded Context Refinement After acquiring a nearly noise-free set $U_D$ of selected dialogue turns, we consider that directly using their representations as inputs may ignore the cross attention between them since they are used as a whole.", "As a result, we concatenate these dialogue utterances together to form a new input sequence $C = [CLS] \oplus B_{T-1} \oplus t_1 \oplus D_1 \oplus \dots \oplus t_{T_S} \oplus D_{T_S} \oplus t_T \oplus D_T$ $(T_S = |U_D|)$.", "Especially, we inject an indicator token $t$ before each turn of dialogue utterance to get aggregated turn embeddings for the subsequent classification-based state prediction.", "Then we feed this sequence into a single PrLM to obtain the contextualized output representation.", "Slot Value Generation We first attempt to obtain the value using the extractive method from the representation $C_E = D_1 \oplus D_2 \oplus \dots \oplus D_{T_S} \oplus D_T$: $p = \mathrm{softmax}(W_s C_E ([SLOT]^j_{T-1})^\top)$ (14); $q = \mathrm{softmax}(W_e C_E ([SLOT]^j_{T-1})^\top)$ (15).", "The position of the maximum value in $p$ and $q$ will be the start and end predictions of the slot value.", "If this prediction does not belong to the candidate value set of $S^j$, we use the representation $C_C = t_1 \oplus t_2 \oplus \dots \oplus t_{T_S} \oplus t_T$ to get the distribution and choose the candidate slot value corresponding to the maximum value: $y = \mathrm{softmax}(W_C C_C ([SLOT]^j_{T-1})^\top)$ (16).", "We define the training objectives of the two methods as cross-entropy losses: $\mathcal{L}_{ext} = -\frac{1}{|U_s|} \sum_j^{|U_s|} (\tilde{p} \log p + \tilde{q} \log q)$ (17); $\mathcal{L}_{cls} = -\frac{1}{|U_s|} \sum_j^{|U_s|} \tilde{y} \log y$ (18), where $\tilde{p}$ and $\tilde{q}$ are the targets indicating the proportion of all possible starts and ends, and $\tilde{y}$ is the target indicating the probability of candidate values.",
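A minimal sketch of the extractive value prediction and its loss (Eqs. 14-15 and 17), assuming one-hot start/end targets; the classification fallback (Eq. 16) and its loss (Eq. 18) would follow the same pattern over the candidate value distribution.

```python
import torch
import torch.nn.functional as F

def predict_value_span(ctx, slot, w_s, w_e):
    """Extractive value prediction (Eqs. 14-15). ctx is the refined context
    representation C_E of shape (seq_len, hidden), slot the [SLOT] vector,
    and w_s / w_e the learned (hidden, hidden) matrices W_s and W_e."""
    p = F.softmax(ctx @ (w_s @ slot), dim=-1)   # start distribution over positions
    q = F.softmax(ctx @ (w_e @ slot), dim=-1)   # end distribution over positions
    return int(p.argmax()), int(q.argmax()), p, q

def extractive_loss(p, q, gold_start, gold_end):
    """Eq. 17 for a single slot, assuming one-hot targets; averaging over the
    selected slot set U_s is left to the caller."""
    return -(torch.log(p[gold_start]) + torch.log(q[gold_end]))
```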
"We conduct experiments on most of the mainstream benchmark datasets on task-oriented dialogue, including MultiWOZ 2.1, MultiWOZ 2.2, Sim-R, Sim-M, and DSTC2.", "MultiWOZ 2.1 and MultiWOZ 2.2 are two versions of a large-scale multi-domain task-oriented dialogue dataset.", "It is a fully-labeled collection of human-human written dialogues spanning multiple domains and topics.", "Sim-M and Sim-R are multi-turn dialogue datasets in the movie and restaurant domains, respectively.", "DSTC2 is collected in the restaurant domain.", "We use joint goal accuracy and slot accuracy as evaluation metrics.", "Joint goal accuracy refers to the accuracy of the dialogue state in each turn.", "Slot accuracy only considers slot-level accuracy.", "We compare the performance of DiCoS-DST with the following baselines: TRADE encodes the dialogue and decodes the value using a copy-augmented decoder (Wu et al., 2019).", "BERT-DST generates language representations suitable for scalable DST (Chao and Lane, 2019).", "DST+LU presents an approach for multi-task learning of language understanding and DST (Rastogi et al., 2018).", "TripPy extracts values from the dialogue context by three copy mechanisms (Heck et al., 2020).", "DSS-DST consists of the slot selector based on the current turn dialogue, and the slot value generator based on the dialogue history (Guo et al., 2021).", "Seq2Seq-DU employs two BERT-based encoders to respectively encode the utterances and the descriptions of schemas (Feng et al., 2021).", "Pegasus-DST applies a span prediction-based pretraining objective designed for text summarization to DST (Zhao et al., 2021).", "DST-as-Prompting uses schema-driven prompting to provide task-aware history encoding (Lee et al., 2021).", "We employ a pre-trained ALBERT-large-uncased model (Lan et al., 2019) for the encoder.", "The hidden size of the encoder $d$ is 1024.", "We use the AdamW optimizer (Loshchilov and Hutter, 2018) and set the warmup proportion to 0.01 and the L2 weight decay to 0.01.", "We set the peak learning rate of the State Update Predictor the same as in DSS-DST and the peak learning rate of the other modules to 0.0001.", "We set the dropout (Srivastava et al., 2014) rate to 0.1.", "We utilize word dropout (Bowman et al., 2016) with a probability of 0.1.", "We set $L$ to 3.", "The max sequence length for all inputs is fixed to 256.", "During training of the Multi-Perspective Dialogue Collaborative Selector, we use the ground truth selected slots instead of the predicted ones.", "We report the mean joint goal accuracy over 10 different random seeds to reduce statistical errors.",
Table 1: Accuracy (%) on the test sets of benchmark datasets vs. various approaches as reported in the literature.
| Model | MultiWOZ 2.1 Joint | MultiWOZ 2.1 Slot | MultiWOZ 2.2 Joint | MultiWOZ 2.2 Slot | Sim-M Joint | Sim-R Joint | DSTC2 Joint |
| TRADE | 45.60 | - | 45.40 | - | - | - | - |
| DST+LU | - | - | - | - | 46.0 | 84.9 | - |
| BERT-DST | - | - | - | - | 80.1 | 89.6 | 69.3 |
| TripPy | 55.29 | - | - | - | 83.5 | 90.0 | - |
| Pegasus-DST | 54.40 | - | 57.60 | - | - | - | 73.6 |
| DST-as-Prompting | 56.66 | - | 57.60 | - | 83.3 | 90.6 | - |
| Seq2seq-DU | 56.10 | - | 54.40 | - | - | - | 85.0 |
| DSS-DST | 60.73 | 98.05 | 58.04 | 97.66 | - | - | - |
| DiCoS-DST (k=1) | 60.89 (±0.47) | 98.05 (±0.02) | 61.04 (±0.56) | 98.05 (±0.04) | 84.5 (±1.2) | 91.2 (±0.3) | 77.7 (±0.2) |
| DiCoS-DST (k=2) | 61.02 (±0.41) | 98.05 (±0.02) | 61.13 (±0.54) | 98.06 (±0.03) | 84.7 (±1.1) | 91.5 (±0.3) | 78.4 (±0.2) |
| DiCoS-DST (k=3) | 60.85 (±0.24) | 98.05 (±0.01) | 60.88 (±0.33) | 98.05 (±0.03) | 83.8 (±1.1) | 91.0 (±0.2) | 77.3 (±0.2) |
"Table 1 shows the performance of our DiCoS-DST and other baselines.", "Our model achieves state-of-the-art performance on MultiWOZ 2.1 and MultiWOZ 2.2 with joint goal accuracies of 61.02% and 61.13%.", "In particular, the joint goal accuracy on MultiWOZ 2.2 outperforms the previous state-of-the-art by 3.09%.", "Besides, despite the sparsity of experimental results on Sim-M and Sim-R, our model still achieves state-of-the-art performance on these two datasets.", "On DSTC2, the performance of our model is also competitive.", "Among our models, DiCoS-DST ($k=2$) performs the best on all datasets.", "Especially, DiCoS-DST ($k=2$) and DiCoS-DST ($k=1$) perform better than DiCoS-DST ($k=3$).", "We conjecture that selecting two turns from the dialogue history may be sufficient, and introducing more turns may confuse the model.", "Different PrLMs We employ different pre-trained language models with different scales as the backbone for training and testing on MultiWOZ 2.2.", "Table 2 shows that the joint goal accuracy of the other encoders decreases to varying degrees compared with ALBERT (large).", "The joint goal accuracy of BERT (base) decreases by 1.62%, but still outperforms the previous state-of-the-art performance on MultiWOZ 2.2.", "This demonstrates that our model achieves consistent performance gains in all fair comparisons.",
Table 3: Ablation study with joint goal accuracy (%).
| Model | MultiWOZ 2.2 |
| DiCoS-DST | 61.13 |
| - State Update Predictor | 58.48 (-2.65) |
| - Multi-Perspective Dialogue Collaborative Selector | 54.94 (-6.19) |
| - Cascaded Context Refinement | 59.75 (-1.38) |
"Effect of Core Components To explore the effectiveness of the core components, we conduct an ablation study of them on MultiWOZ 2.2.", "As shown in Table 3, we observe that the performance degrades by 2.65% in joint goal accuracy when the State Update Predictor is removed.", "It is worth mentioning that this performance still outperforms the previous state-of-the-art, which demonstrates that the large performance gain of DiCoS-DST over other baselines comes from its dialogue selection.", "This is also supported by the observation that the performance of the model without the Multi-Perspective Dialogue Collaborative Selection module drops drastically (degrades by 6.19% in joint goal accuracy).", "In addition, when we remove the Cascaded Context Refinement module, we lose 1.38%, indicating the usefulness of interaction between different dialogue turns.",
Table 4: Ablation study with joint goal accuracy (%).
| Perspective(s) | MultiWOZ 2.2 |
| SN-DH | 57.73 (-3.40) |
| CT-DH | 55.47 (-5.66) |
| IMOR | 55.11 (-6.02) |
| SN-DH + CT-DH | 59.56 (-1.57) |
| SN-DH + IMOR | 58.68 (-2.45) |
| CT-DH + IMOR | 56.79 (-4.34) |
| SN-DH + CT-DH + IMOR | 61.13 |
"Separate Perspective and Combinations We explore the performance of each separate perspective and their various combinations.", "When a perspective needs to be masked, we set its corresponding gating weight to 0.", "It can be observed in Table 4 that the SN-DH module has the greatest impact on performance, and the most effective combination of perspectives is the combination of SN-DH and CT-DH.",
"Despite the simplicity of the mechanism of SN-DH, the association with the slot name straightforwardly reflects the importance of the dialogue.", "To solve the common problem of coreferences in complex dialogues, the Implicit Mention Oriented Reasoning module brings the performance close to that of CT-DH.", "Graph Edges Ablation We investigate the effect of the different edges in the GNN.", "As shown in Table 5, the performance degradation is relatively obvious when the first, second, and third types of edges are removed separately.", "It indicates that the majority of the connections are indeed there to construct the reasoning logic, while the correlation of the same domain's slots plays an auxiliary role.", "In addition, we design two comparative experiments.", "First, we start naively by fully connecting all dialogue nodes to enhance the interaction among dialogue turns.", "However, this change does not give a clear benefit.", "This is mostly because the initialization of the dialogue nodes using the dialogue representation output by MHSA already includes the contextual interactions between the dialogues.", "Second, we add a third type of edges between each slot-value pair node and all dialogue nodes without distinguishing the correspondence.", "We observe that this change does harm to the performance (degrades by 1.09%).", "This reflects the importance of using other slots to explore their corresponding turns of dialogue when dealing with coreferences.", "DiCoS-DST filters out some distracting information by selecting relevant dialogues, but does it really go beyond the granularity?", "To investigate this, we simulate the granularity and compare it with DiCoS-DST.", "Specifically, we use the maximum granularity (i.e., the number of dialogue turns spanning from the selected furthest dialogue turn to the current turn) and capture the corresponding dialogue contents as input to the State Generator.",
Table 6: The joint goal accuracy (%) of different k on MultiWOZ 2.2.
| k | DiCoS-DST | Granularity-Based |
| 1 | 61.04 | 59.58 (-1.46) |
| 2 | 61.13 | 59.88 (-1.25) |
| 3 | 60.88 | 59.91 (-0.97) |
"As shown in Table 6, DiCoS-DST outperforms the granularity-based method by 1.46% ($k=1$), 1.25% ($k=2$), and 0.97% ($k=3$), indicating that there is still redundant information in the dialogue contents determined by the granularity that confuses the model.", "Table 7 shows the domain-specific results when we set different values for $k$ ($k = 0, 1, 2$).", "In the taxi and train domains, the performance of the model decreases significantly when $k=0$ compared to $k=2$, implying that acquiring the values of the slots in these domains is highly dependent on the dialogue history.", "Nevertheless, there is no significant difference in performance in the attraction domain when we set different values for $k$.", "This indicates that the values of the slots in this domain can usually be simply obtained from the current turn dialogue, instead of using the dialogue history or resolving coreferences.", "We introduce an effective DiCoS-DST that dynamically selects the relevant dialogue contents corresponding to each slot from a combination of three perspectives.", "The dialogue collaborative selector module performs a comprehensive selection for each turn of dialogue based on its relation to the slot name, its connection to the current turn dialogue, and implicit mention oriented reasoning.", "Then only the selected dialogue contents are fed into the State Generator, which explicitly minimizes the distracting information passed to the downstream state prediction.",
"Our DiCoS-DST model achieves new state-of-the-art performance on the MultiWOZ benchmark, and achieves competitive performance on most other DST benchmark datasets.", "The potential relationship among the above perspectives is a promising research direction, and we will explore it for more than dialogue selection in the future.", "This work was supported by Beijing Natural Science Foundation (Grant No. 4222032) and the BUPT Excellent Ph.D. Students Foundation.", "We thank the anonymous reviewers for their insightful comments.", "The claims in this paper match the experimental results.", "This work focuses on DST in task-oriented dialogue systems, and the improvements could have a positive impact on helping humans to complete goals more effectively in a more intelligent way of communication." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "objective", "objective", "abstain", "abstain", "objective", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "other", "other", "other", "abstain", "abstain" ]
[ "Language models keep track of complex linguistic information about the preceding context including, e.g., syntactic relations in a sentence.", "We investigate whether they also capture information beneficial for resolving pronominal anaphora in English.", "We analyze two state of the art models with LSTM and Transformer architectures, respectively, using probe tasks on a coreference annotated corpus.", "Our hypothesis is that language models will capture grammatical properties of anaphora (such as agreement between a pronoun and its antecedent), but not semantico-referential information (the fact that pronoun and antecedent refer to the same entity).", "Instead, we find evidence that models capture referential aspects to some extent though they are still much better at grammar.", "The Transformer outperforms the LSTM in all analyses, and exhibits in particular better semantico-referential abilities.", "Neural network-based language models (LMs) have been shown to learn relevant properties of language without being explicitly trained for them.", "In particular, recent work suggests that they are able to capture syntactic relations to a large extent (Gulordava et al., 2018; Kuncoro et al., 2018; Wilcox et al., 2018).", "In this paper, we extend this line of research to analyze whether they are able to capture referential aspects of language, focusing on anaphoric relations (pronoun-antecedent relations, as in she Yeping Wang in Figure 1).", "Previous work, such as Ji et al. (2017), Yang et al. (2017) and Cheng and Erk (2019), showed that augmenting language models with a component that uses an objective based on entity or coreference information improves their performance at . . . he 1 was elected to be president of the Peo-ple's Republic of China, and chairman of the 2 Central 2 Military 2 Commission 2 .", "Yeping 3 Wang 3 was born in Shanghai in 1926.", "She 3 studied in Shanghai Foreign Language College, and started working in 1949.", "For a long time, she 3 . . . 
"Figure 1: Example from OntoNotes with a window of 60 tokens (as used in our first probe task).", "Intuitively, in the example in Figure 1, understanding that the first she refers to Yeping Wang makes words related to studying or working more likely to follow than other kinds of words.", "That is, referential information helps language models do their task.", "The cited work includes explicit coreference guidance; however, since referential information is useful for language modeling, we expect language models to learn referential information even without explicit supervision.", "Here we analyze to what extent this is the case.", "We carry out our analysis using probe tasks, or tasks that check whether certain information is encoded in a model (Adi et al., 2016; Linzen et al., 2016; Conneau et al., 2018; Giulianelli et al., 2018).", "The reasoning is as follows: Even if a linguistic property is encoded in the network, it is not necessarily directly accessible through the model output; therefore, we train a probe model to predict a feature of interest, in this case anaphoric coreference, given the model's hidden representations as input.", "We focus on the two main linguistic levels that are relevant for coreference: morphosyntax, with grammatical constraints such as the fact that pronouns agree in number and gender with their antecedents, and semantics, in particular reference, such as the fact that a pronoun refers to the same entity as its antecedent.", "Our hypothesis is that language models will capture grammatical properties, but not semantic information.", "This hypothesis is based on the observation that morphosyntax is a formal property of language that is easier to induce from co-occurrence patterns.", "The fact that language refers to entities is not obvious from language alone (Harnad, 1990), and LMs use only textual input.", "Instead, what we find is that, while it is true that language models are much better at grammar, they do show evidence of learning semantico-referential information to some extent.", "Our explanation for this unexpected, partially positive result is that, because the same entity underlies all its mentions, the contexts in which the mentions appear are coherent and distinct from those of mentions of other entities.", "For instance, in Figure 1, the second she mention gives additional information about Yeping Wang that is consistent with the information given in the previous sentence.", "This paper has two main contributions.", "The first is an analysis methodology to probe for referential information encoded in language models, on two linguistic levels (morphosyntax, semantics) and two kinds of context: local (around one paragraph of context), and global (document context).", "This methodology can be applied to any architecture.", "The second contribution is a deeper understanding of the referential capabilities of current language models, and of the differences between Transformers and LSTMs.", "The Transformer outperforms the LSTM in all the analyses.", "For morphosyntax, the Transformer and the LSTM have the same behavior with a performance difference; instead, they show different behavior with regard to semantico-referential information.", "Coreference and anaphora resolution (Mitkov, 2002; Poesio et al., 2016) are among the oldest topics in computational linguistics and have continued to receive a lot of attention in the last decade, as manifested by several shared tasks (Pradhan et al., 2011, 2012; Poesio et al., 2018).", "In our 
analysis we use the OntoNotes dataset (Hovy et al., 2006; Pradhan et al., 2012), developed within the coreference resolution community.", "Our probe tasks are related to coreference resolution; however, our goal is not to train a coreference system but to analyse whether language models extract features relevant for reference without explicit supervision.", "A recent line of work has focused on demonstrating that neural networks trained on language modeling, without any linguistic annotation, learn syntactic properties and relations such as agreement or filler-gap dependencies (Linzen et al., 2016; Gulordava et al., 2018; Kuncoro et al., 2018; Wilcox et al., 2018; Futrell et al., 2018).", "This is typically done by analysing the predictions of LMs on controlled sets of data.", "Part of this research uses probe models (also known as diagnostic models) to analyse the information contained in their hidden representations (Adi et al., 2016; Conneau et al., 2018; Hupkes et al., 2018; Lakretz et al., 2019; Giulianelli et al., 2018), as we do here applying it to referential information.", "There is less work on referential information than on syntactic properties such as subject-verb agreement.", "As for anaphoric reference, Peters et al. (2018) include a limited test using 904 sentences from OntoNotes.", "Their results suggest that LMs are able to do unsupervised coreference resolution to a certain extent; our first probe task can be seen as an extended version of their task obtaining more specific insights.", "Jumelet et al. (2019) analyze the kind of information that LSTM-based LMs use to make decisions in within-sentence anaphora.", "They find a strong male bias encoded in the network's weights, while the information in the input word embeddings only plays a role in the case of feminine pronouns.", "We analyze anaphora in longer spans (60 tokens / whole document) and include also a Transformer.", "The above work suggests that LMs capture morphosyntactic facts about anaphora to a large extent.", "There is much less evidence that LMs can capture a notion of entity, as that which nominal elements refer to, and that they are able to track entities across a discourse.", "Parvez et al. (2018) show that LSTM-based models have poor results on texts with a high presence of entities; Paperno (2014) that they cannot predict the last word of text fragments that require a context of a whole passage (as opposed to the last sentence only), with data that mostly contain nominal elements.", "Several models (Henaff et al., 2019; Yang et al., 2017; Ji et al., 2017) were developed as an augmentation of RNN LMs to deal better with entities, with the implicit assumption that standard models do that poorly.", "Aina et al. (2019) achieved good results on an entity-linking task, but showed that the network was not acquiring entity representations.", "As for Transformer-based architectures, recent research suggests that they give same or better contextualized representations in comparison with LSTM language models, and that they better encapsulate syntactic information (Goldberg, 2019; Wolf, 2019).", "On the other hand, van Schijndel et al. 
(2019) show that big Transformer model representations perform on par with or even poorer than smaller LSTMs on tasks such as number agreement or coordination, and that, like LSTMs, they have the problem that agreement accuracy decreases as the subject becomes more distant from its verb.", "Most recent work on analysis of linguistic phenomena in NNs focuses on BERT (Tenney et al., 2019; Clark et al., 2019; Reif et al., 2019; Broscheit, 2019).", "In this paper we chose to use TransformerXL (Dai et al., 2019) as our Transformer model, and not BERT, for comparability: We wanted to compare the two most standard architectures for LMs on as equal ground as possible, and the two chosen models, TransformerXL and AWD-LSTM (Merity et al., 2017), share the same training objective and are trained on the same data, with comparable vocabularies.", "To shed light on which morphosyntactic information LMs encode that is useful for coreference, we train a simple anaphora resolution probe model using the hidden layers of LMs as input.", "By the logic of probe tasks, if the probe model is successful then that means that the relevant information is encoded in the hidden states, and error analysis can provide insight into which kinds of information are available.", "Data We train our probe models on data from OntoNotes 5.0 (Weischedel et al., 2013).", "We use the annotated coreference chains, as well as the provided part-of-speech tags (the latter only for analysis purposes).", "We take all pronouns that have at least one antecedent in a 60-token context window; the task of the probe model is to identify their antecedent.[1]",
Table 1: Dataset statistics for first probe task.
| Split | Tokens | Datapoints |
| Train | 191,830 | 4,949 |
| Dev | 275,201 | 4,556 |
| Test | 2,026,565 | 45,665 |
"An example datapoint is provided in Figure 1 above (note that a window of 60 tokens allows us to check anaphora beyond the sentence).", "For simplicity, antecedents are tokens, but typically there is more than one possible token antecedent for a given pronoun: A mention can span several tokens (Yeping Wang), and the window can contain several mentions from the same coreference chain (Yeping Wang and the first She in Figure 1); we consider any of the tokens a correct answer.", "Note that we are not training the model to explicitly identify mentions, their spans or the complete coreference chains, but to identify the tokens that are antecedents of the target pronoun.", "To obtain enough data for analysis, especially for low-frequency phenomena, we follow Linzen et al. (2016) in reversing the original partitions of the corpus, using the original test set for training and the original training set for testing.[2]", "In addition, we focus on the OntoNotes documents that belong to narrative text sections because the dialogue data does not come with turn segmentation.[3]", "Resulting data statistics for our task are provided in Table 1.",
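A sketch of how such datapoints can be assembled from coreference-annotated text. The pronoun check here is a surface-form stand-in (the paper uses the provided OntoNotes POS tags), and all names and data structures are hypothetical.

```python
PRONOUN_FORMS = {"he", "she", "it", "they", "we", "you",
                 "him", "her", "them", "his", "its", "their"}  # simplified stand-in

def is_pronoun(token: str) -> bool:
    # the paper relies on OntoNotes POS tags; surface forms are only a proxy here
    return token.lower() in PRONOUN_FORMS

def build_datapoints(tokens, chains, window=60):
    """chains: {chain_id: set of token indices in that coreference chain}.
    Returns (pronoun_index, list of correct antecedent indices) pairs."""
    datapoints = []
    for chain_tokens in chains.values():
        for t in sorted(chain_tokens):
            if not is_pronoun(tokens[t]):
                continue
            lo = max(0, t - window)
            # any same-chain token in the preceding window counts as correct
            antecedents = [i for i in chain_tokens if lo <= i < t]
            if antecedents:
                datapoints.append((t, antecedents))
    return datapoints
```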
"Language models The base language models we use are AWD-LSTM (Merity et al., 2017) and TransformerXL (Dai et al., 2019), two state-of-the-art models with the most standard architectures for language modeling as of 2020 (LSTM, Transformer).", "[Footnote 1] We also experimented with windows 20 and 200, obtaining a similar picture.", "[Footnote 2] Using little training data has also been shown to lessen the possibility of confounds in the probe model results; in particular, it makes it more difficult for the probe model to exploit regularities in the training data rather than capturing the analyzed model's ability to capture a phenomenon (Hewitt and Liang, 2019).", "See Voita and Titov (2020) for a theoretical justification from an information-theoretic perspective.", "Results on the original split confirm that the conclusions of the paper are robust: we see an increase in performance of around 3% overall, as could be expected because we use more data, but the same behavior patterns (on the data that can be compared).", "[Footnote 3] We keep newswire (NW), broadcast news (BN), magazine (MZ), web data (WB), and pivot text (PT), removing broadcast conversation (BC) and telephone conversation (TC).", "We chose these models for comparison because they are trained on the same dataset (Wiki103; Merity et al., 2016), they have a comparable vocabulary, and they are both very strong language models, with perplexities of 24 for TransformerXL and 33 for AWD-LSTM.", "TransformerXL is a bit larger than AWD-LSTM, though (151 million parameters compared to 126 million), which should be kept in mind when assessing results.[4]", "Probe model For each word $x_i$ in the window of size $m$ preceding the target pronoun $x_t$, we obtain its contextualized representation $h_i$ from the last hidden layer of the language model (Eq. 1).", "The probe model takes this representation as input and is trained to map it onto a vector $o_i$ using a non-linear transformation (Eq. 2).", "The target pronoun representation is transformed in the same way.", "The dot products between these transformed representations of target and context word vectors give the attention weights $ref_i$ (Eq. 3), representing the similarity between two representations.", "The weights are transformed into probabilities using the softmax function (Eq. 4).", "In this way we obtain a probability distribution $p_i$ over context tokens.", "During training, the probe model's objective is to assign higher probabilities (and thus attention weights) to correct antecedents, and lower probabilities to incorrect ones, through the use of the Kullback-Leibler divergence loss (Eq. 5).", "We use the KL loss because we frame the task in terms of a probability distribution over mentions in the context.", "For the reasons discussed above, there can be $k > 1$ correct predictions out of $m$ tokens in the window.", "We assume that the gold probability distribution is uniform over the $k$ correct tokens, that is, each of these tokens has a probability $\tilde{p}_i = \frac{1}{k}$ and all other tokens have a probability of 0.[5]",
"[Footnote 4] We also trained an in-house LSTM on data that are more similar to those of OntoNotes and with a smaller vocabulary.", "The results for this model (not reported) follow the same patterns as those found for the AWD-LSTM and TransformerXL models, although the performance on this probe task is much higher than that of AWD-LSTM.", "[Footnote 5] Note however that minimizing KL divergence and minimizing cross-entropy gives the same results, because $\mathrm{KL}(p \,\|\, q) = \mathrm{CrossEntropy}(p, q) - \mathrm{entropy}(p)$, and $\mathrm{entropy}(p)$ is constant.", "Technically, in PyTorch the cross-entropy loss is only implemented for classification task targets, while the more general KL loss is available for predicting probability distributions.", "$h_i = \mathrm{LSTM}(x_i)$ (1); $o_i = \mathrm{ReLU}(W h_i + b)$ (2); $ref_i = o_i \cdot o_t, \; i \in [t-m, t-1]$ (3); $p_i = \mathrm{softmax}(ref_i), \; i \in [t-m, t-1]$ (4); $\mathcal{L} = \mathrm{KL}(\tilde{p}_i, p_i)$ (5).", "As mentioned above, we fix $m = 60$.", "We train the probe model for 50 epochs with a learning rate of 1e-5 and Adam as the optimizer.", "The transformed vectors $o_i$ have a dimensionality of 650 for both models, while $h_i$ has 400 dimensions for the AWD-LSTM and 1024 for TransformerXL.",
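Read together, Eqs. (1)-(5) amount to a very small model; a PyTorch sketch with assumed shapes and names, not the authors' code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnaphoraProbe(nn.Module):
    """Probe over frozen LM hidden states: one non-linear layer (Eq. 2),
    dot-product scores from the pronoun to each window token (Eq. 3),
    softmax over the window (Eq. 4)."""

    def __init__(self, lm_dim: int, probe_dim: int = 650):
        super().__init__()
        self.transform = nn.Linear(lm_dim, probe_dim)   # o_i = ReLU(W h_i + b)

    def forward(self, h_window: torch.Tensor, h_pronoun: torch.Tensor) -> torch.Tensor:
        # h_window: (m, lm_dim) hidden states of the m = 60 context tokens
        # h_pronoun: (lm_dim,) hidden state of the target pronoun
        o = F.relu(self.transform(h_window))            # (m, probe_dim)
        o_t = F.relu(self.transform(h_pronoun))         # (probe_dim,)
        ref = o @ o_t                                   # dot-product weights
        return F.log_softmax(ref, dim=-1)               # log of p_i

def probe_loss(log_p: torch.Tensor, antecedent_positions: list, m: int = 60) -> torch.Tensor:
    """Eq. 5: KL divergence against the gold distribution, which is uniform
    over the k correct antecedent tokens and zero elsewhere."""
    gold = torch.zeros(m)
    gold[antecedent_positions] = 1.0 / len(antecedent_positions)
    return F.kl_div(log_p, gold, reduction="sum")
```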
"Baselines We report two rule-based baselines that give relatively good performance in anaphora resolution: referring to the previous entity (given by the oracle gold annotation; in Figure 1, she would refer to the previous She), and always pointing to the token in the window that has the same form as the target pronoun (that is, in Figure 1, she - She; we ignore capitalization).", "In addition, to compare the result of the probe model with the input representations, we also report an unsupervised baseline: referring to the token in the window that has the highest similarity $\cos(h_i, h_t)$ to the target pronoun, i.e., relying on the similarity between the non-transformed hidden representations.", "Table 2 summarizes the results of the pronominal anaphora probe task.", "The probe model trained on top of the LSTM improves a bit over the strongest baseline, and that of the Transformer does so substantially (75.9 vs. 61.3; the LSTM obtains 64.8).", "This performance suggests that the LMs use more information than simple heuristics like referring to a token with the same form.", "The unsupervised baseline performs worse, as could be expected: the raw similarity between hidden states is based on many more aspects than those related to reference, given that hidden states are responsible for capturing all the contextual features that are relevant for word prediction.", "This is why a probe model is needed to distill the reference-related information from the hidden layers.", "A single non-linear layer trained on only 5K datapoints improves performance by 23-28 absolute accuracy points (supervised vs. unsupervised results), which suggests that the referential information in the hidden layers is easy to extract.", "Behaviorally, the unsupervised hidden layers are quite similar to the baselines.", "First, they are biased towards tokens of the same form: in 27.1% of the cases, the LSTM layer of the pronoun presents the highest similarity to a token with the same form; 29.1% in the case of the Transformer.", "Second, they prefer close antecedents, although the LSTM presents this recency bias to a much higher degree: in 27.8% of the cases, the LSTM layer of the pronoun has the highest similarity to the previous token (16.4% in the Transformer).", "The attention mechanism of the Transformer gives access to a broader context and allows it to overcome the recency bias to some degree.", "The great difference in performance between AWD-LSTM and TransformerXL could suggest that the latter is using different strategies compared to the former.", "Instead, except for the recency bias, what we find are exactly the same patterns in behavior, with a systematic 10% accuracy gap.", "For this reason, although we provide results for both models everywhere to show that this observation indeed holds, in this section we will mostly focus on the Transformer when commenting on results.", "The models clearly learn grammatical constraints related to anaphora that are well-studied in the literature and are relied upon by traditional anaphora resolution models (Sukthanker et al., 2018).", "First, as shown in Table 3, the Transformer identifies mentions (elements inside some coreference chain) in 92.6% of the cases.", "Table 3 (excerpt): predicted antecedents inside a coreference chain: LSTM 90.2%, Transformer 92.6%.", "Moreover, it correctly learns that pronouns typically refer to nominal elements (almost 95% of identified antecedents are pronouns, proper nouns, and elements within a noun phrase headed by a common noun).", "Note that pronouns can also have non-nominal antecedents, although these are the minority of the annotations in OntoNotes (cf. example 4 in Figure 3, where it refers to an event).",
example 4 in Figure 3, where it refers to an event).", "Even in the cases in which the Transformer points to elements outside of a chain (7.4%), it points to nominal elements 87% of the time (not shown in the ta-ble).", "The model is most accurate when referring to pronouns (82.6% accuracy), while noun phrases are the hardest category (62.3%).", "This is consistent with the strategies that the model learns, since it largely relies on pronominal agreement, as described below.", "Second, not only do the models mostly point to nominal elements, but they also identify the morphosyntactic properties of pronouns and learn that they should agree with their antecedents in gender and number.", "Figure 2 shows the distribution of pronoun antecedents that the Transformer predicts, for the six most frequent target pronouns (see the Supplementary material for the corresponding LSTM figure).", "Its preferred type of antecedent are pronouns of the same form, but it is also able to point to other pronouns agreeing in number and gender.", "For instance, pronoun he points to 3rd person, masculine, singular pronouns (mostly he , but also his , him ) a pattern consistent across all pronouns.", "Figure 2 is restricted to pronouns; Table 4 shows that the model also largely follows number agreement when predicting antecedents within noun phrases (the table collapses common noun and proper noun antecedents).", "Given a singular pronoun, the model chooses a singular antecedent 98% of the time; given a plural pronoun, it identi-fies a plural antecedent in 73% of the cases.", "Note that in cases of plural pronouns such as Figure 2: Pronominal agreement with Transformer probe model: Proportion of cases in which elements in the rows corefer with elements in the columns.", "they it is common that the referent be a singular noun (e.g., the audience in example 3, Figure 3), reflected by the reasonable accuracy of the Transformer in pl-sg cases (53.1%).", "The language model clearly captures morphosyntactic (grammatical) properties that constrain anaphora resolution; in this section, we show that it struggles more with is the semantic (referential) aspect, but it still captures it to some extent.", "If the model were able to model entities, it should be robust to distractors , that is, mentions in the context that are not antecedents in Figure 1, he and the Central Military Commission .", "Figure 4 shows that the accuracy for the Transformer decreases as does the proportion of gold mentions.", "We compute this proportion as the number of gold mentions in the 60-token window divided by the total number of mentions in the same window.", "When there are no distractors (gold proportion = 1), accuracy is very high, which is to be expected given that the model learnt to identify mentions in the first place (cf. 
previous section).", "The more distractors (i.e., the lower the proportion of gold mentions), the lower the accuracy; however, accuracy decreases rather gracefully.", "Even when there are only 10% gold mentions in the window, accuracy for most pronoun types is still around 60-80%.", "The exception is it , which is the most difficult pronoun for the model, presumably because it can refer to many kinds of antecedents.", "6 Figure 4 thus paints a nuanced picture: distractors confuse the model, but they do not fool it completely.", "Given the results in the previous section, we expect that distractors sharing morphosyntactic features will be particularly challenging.", "Table 5 confirms this, zooming in into pronominal distractors.", "We consider a datapoint having a pronominal distractor if one of the antecedents is a pronoun pointing to another entity.", "When there are no pronominal distractors (25.9% of the test set), the accuracy of the Transformer is 81.8%; with at least one distractor, it goes down to 73.8% clearly worse but not dramatically so.", "However, in cases where anaphoric pronoun and antecedent have the same gender, number, or are the same pronoun, we get much lower accuracies (48.6, 65.3, and 49.1, respec-tively).", "This suggests that that the model overly relies on morphosyntactic features and recency (see previous section).", "7 However, accuracy in these cases goes down but is still decent, compared to a reasonable baseline (last column in the table).", "For each target anaphoric pronoun, we calculate baseline accuracy as the percentage of gold pronouns in the window (pronouns that are in the same chain as the target), that is, number of gold pronouns divided 6 While most personal pronouns refer to people, which are relatively homogeneous kinds of referents, it refers to very varied kinds of referents.", "Qualitative analysis suggests that the model is quite successful when it refers to concrete entities ( province , peanut ), but much less when it refers to abstract objects like propositions or events, as in example 4 of Figure 3 (where it refers to the event of trying to improperly influence a witness).", "A quantitative check confirms this hypothesis: Cases in which the model fails have around 18% of verbal references, compared to less than 2% for cases in which the model is right.", "7 Among the hardest cases are those where two coreference chains in the window have the same pronoun (e.g. he ) or gender (e.g. he-his ).", "Most of these cases appear when the text includes reported speech (see Figure 3, example 1).", "Otherwise, there are few cases of such local ambiguity, which is presumably avoided by language speakers.", "However, qualitative analysis suggests that the presence of distractors is also problematic in the case of nouns, as illustrated in example 2 of Figure 3, where the model is presumably confused by a noun of the same gender and number as the pronoun ( priest vs. Peter-him ).", "by the total number of pronouns in the window. Then we calculate the average of this accuracy over the respective subset (no distractors / distractors / same gender, etc.). The baseline when there are no distractors is by definition 100%; when there are distractors, it ranges between 15.7 and 32%. All model accuracies are well above this baseline.", "The results thus suggest that the models are able to distinguish mentions of different entities to some extent, although they are far worse at this than at capturing morphosyntactic features. 
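The pronoun baseline described above is simple enough to state in a few lines; a sketch with assumed inputs (not the authors' analysis code):

```python
def pronoun_baseline_accuracy(window_tokens, gold_positions, is_pronoun):
    """Expected accuracy of picking a window pronoun at random: the share of
    window pronouns that belong to the target's chain. Averaging this value
    over a subset (no distractors / distractors / same gender, ...) gives the
    corresponding baseline figure."""
    pronoun_positions = [i for i, tok in enumerate(window_tokens) if is_pronoun(tok)]
    if not pronoun_positions:
        return None  # no pronouns in the window; datapoint contributes nothing
    gold = sum(1 for i in pronoun_positions if i in gold_positions)
    return gold / len(pronoun_positions)
```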
"In the following subsection, we provide further support for this interpretation.", "Our last piece of analysis looks at whole documents. We aim at testing whether the hidden representations of the language models contain information that can help distinguish mentions of the same entity from mentions of some other entity, even if they are of the same form; for instance, a pronoun she referring to two different women. We use coreference chains to identify the tokens referring to the same entity, and train a probe model to determine when two pronouns are referring to the same entity, that is, whether they are part of the same coreference chain in a document. In the previous probe task, where the model was trained to find a correct local antecedent, the model could use cues such as linear distance and syntactic relations; here it should rely on more persistent entity-related features in the hidden representations.", "Experimental Setup. We focus on pronouns because they cannot be disambiguated on the basis of lexical features. We use the same train/test partition as in the first probe task. For each datapoint, we have two pronouns, $x$ and $y$, which can either come from the same chain, or not. Again, we take each pronoun to be represented by the last hidden layer representation of the language model (Eq. (1)): $h_x$ and $h_y$. We call this representation unsupervised, and will compare it to the supervised one, obtained as follows.", "Similarly to the previous probe task, the embeddings are transformed through a learnt linear transformation to a 400-dimensional vector to extract features relevant for the entity identification task (Eqs. (6) and (7)). We take the cosine between the transformed representations as the similarity between the two pronouns.", "Positive datapoints contain two pronouns belonging to the same chain; negative datapoints, two pronouns from two different chains. During training, for each document, we extract all positive pairs and then randomly select the same number of negative pairs. The model optimises a max-margin loss on these datapoints (Eq. (8), where $x$ and $y$ belong to the same chain and $x'$ and $y'$ belong to two different chains).", "Results Figure 5 plots the similarities between positive and negative pairs (solid and dashed lines, respectively) for the two analyzed language models, compared to linear distance in the text. The left graph corresponds to unsupervised similarities, the right graph to supervised similarities. To control for token form effects, we only include data with the same pronoun pairs in this graph. Three results stand out. First, despite training with a global objective, with no linear information, similarities are negatively correlated with linear distance in text. This is consistent with the tendency of the unsupervised cosine baseline of pointing to the closest token (see Section 3).", "The second result is that, crucially, after controlling both for distance and for pronoun form, similarities are systematically higher for coreferring pronoun pairs than for non-coreferring ones. Thus, some properties make their way into the hidden representations (and the probe model) that make coreferring mentions distinct from non-coreferring mentions modulo distance: if we attempt to globally distinguish chains, we instead obtain null results (see Supplementary Materials).", "This is because, with linear distance, the similarity in the entity-centered representation space shrinks very fast; same-chain mentions that are further away have lower average similarities than different-chain mentions that are nearby.",
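A sketch of the supervised pair probe (Eqs. (6)-(8) as described in the text). Since Eq. (8) is not shown in full, the margin value is an assumption, as are all names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntityPairProbe(nn.Module):
    """Document-level probe: a learnt linear map to a 400-d space, cosine
    similarity between two pronoun representations, max-margin training."""

    def __init__(self, lm_dim: int, probe_dim: int = 400, margin: float = 0.4):
        super().__init__()
        self.proj = nn.Linear(lm_dim, probe_dim)   # Eqs. 6-7
        self.margin = margin                       # assumed value

    def similarity(self, h_x: torch.Tensor, h_y: torch.Tensor) -> torch.Tensor:
        return F.cosine_similarity(self.proj(h_x), self.proj(h_y), dim=-1)

    def loss(self, same_chain_pair, diff_chain_pair) -> torch.Tensor:
        pos = self.similarity(*same_chain_pair)    # (h_x, h_y), same chain
        neg = self.similarity(*diff_chain_pair)    # (h_x', h_y'), different chains
        return F.relu(self.margin - pos + neg)     # hinge: pos should exceed neg
```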
"Finally, the third main result is that the supervised model is able to extract discriminating information from the hidden layers to a much larger extent in the Transformer than in the LSTM (cf. the distance between the blue and red lines, respectively). We interpret this to mean that such information is encoded to a larger extent in the Transformer. Also note that the supervised LSTM model is more sensitive to linear distance than any of the other representations (cf. the steeper curves between 0-100 token distances). As we signaled in the previous section, the LSTM is more prone to recency biases, and it looks like its global representations contain less entity-related information than in the case of the Transformer, such that the supervised model defaults to recency. We conclude from this that the Transformer accounts for semantico-referential aspects better than the LSTM.", "Overall, the results suggest that token form and proximity in text remain the main properties encoded in the hidden states of entity mentions, but other properties that discriminate between coreferring and non-coreferring mentions are present to some extent, allowing for partial discrimination.", "Previous work has provided robust evidence that language models capture grammatical information without being explicitly trained to do so (Linzen et al., 2016; Gulordava et al., 2018). In this paper, we have analyzed to what extent they learn referential aspects of language, focusing on anaphora. We have tested two models representative of the prevailing architectures (Transformer, LSTM), and our methodology can be extended to any other architecture.", "We find that the two models behave similarly, but the Transformer performs consistently better (around 10% higher accuracy in the probe tasks).[8] Future work should test other architectures, like CNN-based LMs and LSTMs with attention, to provide additional insights into the linguistic capabilities of language models.", "[Footnote 8] With the caveat that the model we tested is slightly bigger than its LSTM counterpart.", "As expected, our results show that language models capture morphosyntactic facts about anaphora: Based on the information in the hidden layers, a simple linear transformation learns to link pronouns to other pronouns or noun phrases, and to do so largely respecting agreement constraints in gender and number.", "Although it is much harder for models to induce a more global notion of entity (what we have called semantico-referential aspects), models seem to encode entity-specific information to some extent. Models get confused when there are other mentions in the context, especially if they match in some morphosyntactic feature, but less than could be expected; and they show some limited ability to distinguish mentions that have the same form but are in different coreference chains, though hampered by their heavy recency bias. 
The recency bias affects LSTMs more, but is also found in Transformers, consistent with previous work on syntax (van Schijndel et al., 2019).", "Our results thus suggest that language models are more successful at learning grammatical constraints than they are at learning truly referential information, in the sense of capturing the fact that we use language to refer to entities in the world; however, they still do surprisingly well at referential aspects, given that they are trained on text alone. Future work should investigate where these primitive referential abilities stem from and how they can be fostered in future architectures and training setups for language modeling, and neural models more generally.", "We gratefully acknowledge the AMORE team for the feedback, advice and support. We are also grateful to the anonymous reviewers for their valuable comments. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 715154), and from the Spanish Ramon y Cajal programme (grant RYC-2015-18907). We thankfully acknowledge the computer resources at CTE-POWER and the technical support provided by Barcelona Supercomputing Center (RES-IM-2019-3-0006). We are grateful to the NVIDIA Corporation for the donation of GPUs used for this research. We are also very grateful to the PyTorch developers. This paper reflects the authors' view only, and the EU is not responsible for any use that may be made of the information it contains." ]
[ "abstain", "objective", "method", "abstain", "result", "abstain", "abstain", "abstain", "objective", "other", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "result", "result", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "objective", "other", "other", "method", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "method", "method", "other", "other", "abstain", "other", "abstain", "other", "result", "result", "abstain", "abstain", "result", "other", "other", "other" ]
[ "Detecting online hate is a difficult task that even state-of-the-art models struggle with.", "Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score.", "However, this approach makes it difficult to identify specific model weak points.", "It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets.", "To enable more targeted diagnostic insights, we introduce HATECHECK , a suite of functional tests for hate speech detection models.", "We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders.", "We craft test cases for each functionality and validate their quality through a structured annotation process.", "To illustrate HATECHECK 's utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.", "Hate speech detection models play an important role in online content moderation and enable scien-tific analyses of online hate more generally.", "This has motivated much research in NLP and the social sciences.", "However, even state-of-the-art models exhibit substantial weaknesses (see Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018; Vidgen et al., 2019; Mishra et al., 2020, for reviews).", "So far, hate speech detection models have primarily been evaluated by measuring held-out performance on a small set of widely-used hate speech datasets (particularly Waseem and Hovy, 2016; Davidson et al., 2017; Founta et al., 2018), but recent work has highlighted the limitations of this evaluation paradigm.", "Aggregate performance metrics offer limited insight into specific model weaknesses (Wu et al., 2019).", "Further, if there are systematic gaps and biases in training data, models may perform deceptively well on corresponding held-out test sets by learning simple decision rules rather than encoding a more generalisable understanding of the task (e.g. Niven and Kao, 2019; Geva et al., 2019; Shah et al., 2020).", "The latter issue is particularly relevant to hate speech detection since current hate speech datasets vary in data source, sampling strategy and annotation process (Vidgen and Derczynski, 2020; Poletto et al., 2020), and are known to exhibit annotator biases (Waseem, 2016; Waseem et al., 2018; Sap et al., 2019) as well as topic and author biases (Wiegand et al., 2019; Nejadgholi and Kiritchenko, 2020).", "Correspondingly, models trained on such datasets have been shown to be overly sensitive to lexical features such as group identifiers (Park et al., 2018; Dixon et al., 2018; Kennedy et al., 2020), and to generalise poorly to other datasets (Nejadgholi and Kiritchenko, 2020; Samory et al., 2020).", "Therefore, held-out performance on current hate speech datasets is an incomplete and potentially misleading measure of model quality.", "To enable more targeted diagnostic insights, we introduce HATECHECK , a suite of functional tests for hate speech detection models.", "Functional testing, also known as black-box testing, is a testing framework from software engineering that assesses different functionalities of a given model by validating its output on sets of targeted test cases (Beizer, 1995).", "Ribeiro et al. 
"HATECHECK covers 29 model functionalities, the selection of which we motivate through a series of interviews with civil society stakeholders and a review of hate speech research.", "Each functionality is tested by a separate functional test.", "We create 18 functional tests corresponding to distinct expressions of hate.", "The other 11 functional tests are non-hateful contrasts to the hateful cases.", "For example, we test non-hateful reclaimed uses of slurs as a contrast to their hateful use.", "Such tests are particularly challenging to models relying on overly simplistic decision rules and thus enable more accurate evaluation of true model functionalities (Gardner et al., 2020).", "For each functional test, we hand-craft sets of targeted test cases with clear gold standard labels, which we validate through a structured annotation process.[1]", "HATECHECK is broadly applicable across English-language hate speech detection models.", "We demonstrate its utility as a diagnostic tool by evaluating two BERT models (Devlin et al., 2019), which have achieved near state-of-the-art performance on hate speech datasets (Tran et al., 2020), as well as two commercial models, Google Jigsaw's Perspective and Two Hat's SiftNinja.[2]", "When tested with HATECHECK, all models appear overly sensitive to specific keywords such as slurs.", "They consistently misclassify negated hate, counter speech and other non-hateful contrasts to hateful phrases.", "Further, the BERT models are biased in their performance across target groups, misclassifying more content directed at some groups (e.g. women) than at others.", "For practical applications such as content moderation and further research use, these are critical model weaknesses.", "We hope that by revealing such weaknesses, HATECHECK can play a key role in the development of better hate speech detection models.", "Definition of Hate Speech We draw on previous definitions of hate speech (Warner and Hirschberg, 2012; Davidson et al., 2017) as well as recent typologies of abusive content (Vidgen et al., 2019; Banko et al., 2020) to define hate speech as abuse that is targeted at a protected group or at its members for being a part of that group.", "We define protected groups based on age, disability, gender identity, familial status, pregnancy, race, national or ethnic origins, religion, sex or sexual orientation, which broadly reflects international legal consensus (particularly the UK's 2010 Equality Act, the US 1964 Civil Rights Act and the EU's Charter of Fundamental Rights).", "Based on these definitions, we approach hate speech detection as the binary classification of content as either hateful or non-hateful.", "[Footnote 1] All HATECHECK test cases and annotations are available on https://github.com/paul-rottger/hatecheck-data.", "Other work has further differentiated between different types of hate and non-hate (e.g. Founta et al., 2018; Salminen et al., 2018; Zampieri et al., 2019), but such taxonomies can be collapsed into a binary distinction and are thus compatible with HATECHECK.", "Content Warning This article contains examples of hateful and abusive language.", "All examples are taken from HATECHECK to illustrate its composition.", "Examples are quoted verbatim, except for hateful slurs and profanity, for which the first vowel is replaced with an asterisk.", "In software engineering, a program has a certain functionality if it meets a specified input/output behaviour (ISO/IEC/IEEE 24765:2017, E).", "Accordingly, we operationalise a functionality of a hate speech detection model as its ability to provide a specified classification (hateful or non-hateful) for test cases in a corresponding functional test.", "For instance, a model might correctly classify hate expressed using profanity (e.g. F*ck all black people) but misclassify non-hateful uses of profanity (e.g. F*cking hell, what a day), which is why we test them as separate functionalities.", "Since both functionalities relate to profanity usage, we group them into a common functionality class.",
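Operationalised this way, scoring a model against a suite of functional tests is straightforward; a sketch assuming a generic (text, gold label, functionality) record format and an arbitrary predict function, not the released HateCheck tooling:

```python
from collections import defaultdict

def evaluate_by_functionality(test_cases, predict):
    """Per-functionality accuracy of a binary hate speech classifier.

    test_cases: iterable of (text, gold_label, functionality) triples
    predict:    any function mapping text to "hateful" / "non-hateful"
    """
    correct, total = defaultdict(int), defaultdict(int)
    for text, gold, functionality in test_cases:
        total[functionality] += 1
        if predict(text) == gold:
            correct[functionality] += 1
    # one accuracy per functional test, rather than a single aggregate score
    return {f: correct[f] / total[f] for f in total}
```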
Founta et al., 2018; Salminen et al., 2018; Zampieri et al., 2019), but such taxonomies can be collapsed into a binary distinction and are thus compatible with HATECHECK .", "Content Warning This article contains examples of hateful and abusive language.", "All examples are taken from HATECHECK to illustrate its composition.", "Examples are quoted verbatim, except for hateful slurs and profanity, for which the first vowel is replaced with an asterisk.", "In software engineering, a program has a certain functionality if it meets a specified input/output behaviour (ISO/IEC/IEEE 24765:2017, E).", "Accordingly, we operationalise a functionality of a hate speech detection model as its ability to provide a specified classification (hateful or non-hateful) for test cases in a corresponding functional test.", "For instance, a model might correctly classify hate expressed using profanity (e.g. F*ck all black people) but misclassify non-hateful uses of profanity (e.g. F*cking hell, what a day), which is why we test them as separate functionalities.", "Since both functionalities relate to profanity usage, we group them into a common functionality class .", "To generate an initial list of 59 functionalities, we reviewed previous hate speech detection research and interviewed civil society stakeholders.", "Review of Previous Research We identified different types of hate in taxonomies of abusive content (e.g. Zampieri et al., 2019; Banko et al., 2020; Kurrek et al., 2020).", "We also identified likely model weaknesses based on error analyses (e.g. Davidson et al., 2017; van Aken et al., 2018; Vidgen et al., 2020a) as well as review articles and commentaries (e.g. Schmidt and Wiegand, 2017; Fortuna and Nunes, 2018; Vidgen et al., 2019).", "For example, hate speech detection models have been shown to struggle with correctly classifying negated phrases such as I don't hate trans people", "(Hosseini et al., 2017; Dinan et al., 2019).", "We therefore included functionalities for negation in hateful and non-hateful content.", "Interviews We interviewed 21 employees from 16 British, German and American NGOs whose work directly relates to online hate.", "Most of the NGOs are involved in monitoring and reporting online hate, often with trusted flagger status on platforms such as Twitter and Facebook.", "Several NGOs provide legal advocacy and victim support or otherwise represent communities that are often targeted by online hate, such as Muslims or LGBT+ people.", "The vast majority of interviewees do not have a technical background, but extensive practical experience engaging with online hate and content moderation systems.", "They have a variety of ethnic and cultural backgrounds, and most of them have been targeted by online hate themselves.", "The interviews were semi-structured.", "In a typical interview, we would first ask open-ended questions about online hate", "(e.g. What do you think are the biggest challenges in tackling online hate?)", "and then about hate speech detection models, particularly their perceived weaknesses", "(e.g. What sort of content have you seen moderation systems get wrong?)", "and potential improvements, unbounded by technical feasibility", "(e.g. 
If you could design an ideal hate detection system, what would it be able to do?).", "Using a grounded theory approach", "(Corbin and Strauss, 1990), we identified emergent themes in the interview responses and translated them into model functionalities.", "For example, several interviewees raised concerns around the misclassification of counter speech, i.e. direct responses to hateful content", "(e.g. I4: people will be quoting someone, calling that person out [...] but that will get picked up by the system).", "We therefore included functionalities for counter speech that quotes or references hate.", "When quoting anonymised responses throughout this article, we identify each interview participant by a unique ID.", "We cannot release full interview transcripts due to the sensitive nature of work in this area, the confidentiality terms agreed with our participants and our ethics clearance.", "Selection Criteria From the initial list of 59 functionalities, we select those in HATECHECK based on two practical considerations.", "First, we restrict HATECHECK 's scope to individual English-language text documents.", "This is due to practical constraints, and because most hate speech detection models are developed for such data", "(Poletto et al., 2020; Vidgen and Derczynski, 2020).", "Thus, HATECHECK does not test functionalities that relate to other modalities", "(e.g. images)", "or languages, or that require context", "(e.g. conversational or social)", "beyond individual documents.", "Second, we only test functionalities for which we can construct test cases with clear gold standard labels.", "Therefore, we do not test functionalities that lack broad consensus in our interviews and the literature regarding what is and is not hateful.", "The use of humour, for instance, has been highlighted as an important challenge for hate speech research", "(van Aken et al., 2018; Qian et al., 2018; Vidgen et al., 2020a).", "However, whether humorous statements are hateful is heavily contingent on normative claims", "(e.g. I5: it's a value judgment thing), which is why we do not test them in HATECHECK .", "HATECHECK comprises 29 functional tests grouped into 11 classes.", "Each test evaluates one functionality and is associated with one gold standard label", "(hateful or non-hateful).", "Each functional test has a set of corresponding test cases.", "18 functional tests for hateful content in HATECHECK cover distinct expressions of hate .", "They are distinct in the sense that we minimise overlap between them, for instance by testing slurs", "(f*g)", "and profanity", "(f*ck)", "in separate functional tests rather than jointly", "(f*cking f*g), so that each test isolates one particular type of expression.", "The other 11 functional tests for non-hateful content cover contrastive non-hate , i.e. 
content which shares linguistic features with hateful expressions.", "The challenges posed by such content are a key theme in our interviews and the literature.", "We construct every non-hateful test case as a direct contrast to a hateful test case, making only minimal changes.", "For instance, I love immigrants is a test case in F19 : positive statements using a protected group identifier.", "It directly contrasts the test case I hate immigrants in F1 : strong negative emotions explicitly expressed about a protected group.", "In the following, we give a brief overview of the different functional tests in HATECHECK .", "Table 1 provides corresponding example test cases.", "Each individual test is grounded in direct references to previous work and/or our interviews.", "These references are detailed in Appendix B. Distinct Expressions of Hate HATECHECK tests different types of derogatory hate speech", "( F1-4 )", "and hate expressed through threatening language", "( F5/6 ).", "It tests hate expressed using slurs", "( F7 )", "and profanity", "( F10 ).", "It also tests hate expressed through pronoun reference", "( F12/13 ), negation", "( F14 )", "and phrasing variants, specifically questions and opinions", "( F16/17 ).", "Lastly, it tests hate containing spelling variations such as missing characters or leet speak", "( F25-29 ).", "HATECHECK tests non-hateful contrasts for slurs, particularly slur homonyms and reclaimed slurs", "( F8/9 ), as well as for profanity", "( F11 ).", "It tests non-hateful contrasts that use negation, i.e. negated hate", "( F15 ).", "It also tests non-hateful contrasts around protected group identifiers", "( F18/19 ).", "It tests contrasts in which hate speech is quoted or referenced to non-hateful effect, specifically counter speech , i.e. direct responses to hate speech which seek to act against it", "( F20/21 ).", "Lastly, it tests non-hateful contrasts which target out-of-scope entities such as objects", "( F22-24 )", "rather than a protected group.", "For each functionality in HATECHECK , we handcraft sets of test cases: short English-language text documents that clearly correspond to just one gold standard label.", "Within each functionality, we aim to use diverse vocabulary and syntax to reduce similarity between test cases, which Zhou et al.", "(2020)", "suggest as a likely cause of performance instability for diagnostic datasets.", "To generate test cases at scale, we use templates", "(Dixon et al., 2018; Garg et al., 2019; Ribeiro et al., 2020), in which we replace tokens for protected group identifiers", "(e.g. I hate [IDENTITY].)", "and slurs", "(e.g. You are just a [SLUR] to me.).", "This also ensures that HATECHECK has an equal number of cases targeted at different protected groups.",
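As an illustration of this template mechanism, a minimal Python sketch of the expansion step is shown below, using the seven protected groups listed next. The template strings and gold labels here are hypothetical examples; the actual templates and slur lists are in the released HATECHECK data, and this is not the authors' code.

```python
IDENTITIES = [
    "women", "trans people", "gay people", "black people",
    "disabled people", "Muslims", "immigrants",
]

def expand_templates(templates, identities=IDENTITIES):
    """Expand [IDENTITY] placeholders so each protected group receives an equal
    number of test cases; templates without placeholders pass through as
    individually crafted cases."""
    cases = []
    for text, gold_label in templates:
        if "[IDENTITY]" in text:
            cases.extend(
                (text.replace("[IDENTITY]", group), gold_label)
                for group in identities
            )
        else:
            cases.append((text, gold_label))
    return cases

# Hypothetical templates paired with gold labels:
examples = [
    ("I hate [IDENTITY].", "hateful"),        # cf. F1
    ("I love [IDENTITY].", "non-hateful"),    # cf. F19
]
print(expand_templates(examples)[:2])
```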
"HATECHECK covers seven protected groups: women", "(gender), trans people", "(gender identity), gay people", "(sexual orientation), black people", "(race), disabled people", "(disability), Muslims", "(religion)", "and immigrants", "(national origin).", "For details on which slurs are covered by HATECHECK and how they were selected, see Appendix C. In total, we generate 3,901 cases, 3,495 of which come from 460 templates.", "The other 406 cases do not use template tokens", "(e.g. Sh*t, I forgot my keys)", "and are thus crafted individually.", "The average length of cases is 8.87 words", "(std. dev. = 3.33)", "or 48.26 characters", "(std. dev. = 16.88).", "2,659 of the 3,901 cases", "(68.2%)", "are hateful and 1,242", "(31.8%)", "are non-hateful.", "Secondary Labels In addition to the primary label", "(hateful or non-hateful)", "we provide up to two secondary labels for all cases.", "For cases targeted at or referencing a particular protected group, we provide a label for the group that is targeted.", "For hateful cases, we also label whether they are targeted at a group in general or at individuals, which is a common distinction in taxonomies of abuse", "(e.g. Waseem et al., 2017; Zampieri et al., 2019).", "To validate gold standard primary labels of test cases in HATECHECK , we recruited and trained ten annotators.", "In addition to the binary annotation task, we also gave annotators the option to flag cases as unrealistic", "(e.g. nonsensical)", "to further confirm data quality.", "Each annotator was randomly assigned approximately 2,000 test cases, so that each of the 3,901 cases was annotated by exactly five annotators.", "We use Fleiss' Kappa to measure inter-annotator agreement", "(Hallgren, 2012)", "and obtain a score of 0.93, which indicates almost perfect agreement", "(Landis and Koch, 1977).", "For 3,879", "(99.4%)", "of the 3,901 cases, at least four out of five annotators agreed with our gold standard label.", "For 22 cases, agreement was less than four out of five.", "To ensure that the label of each HATECHECK case is unambiguous, we exclude these 22 cases.", "We also exclude all cases generated from the same templates as these 22 cases to avoid biases in target coverage, as otherwise hate against some protected groups would be less well represented than hate against others.", "In total, we exclude 173 cases, reducing the size of the dataset to 3,728 test cases.", "Only 23 cases were flagged as unrealistic by one annotator, and none were flagged by more than one annotator.", "Thus, we do not exclude any test cases for being unrealistic.", "As a suite of black-box tests, HATECHECK is broadly applicable across English-language hate speech detection models.", "Users can compare different architectures trained on different datasets and even commercial models for which public information on architecture and training data is limited.", "For information on annotator training, their background and demographics, see the data statement in Appendix A. We make data on annotation outcomes available for all cases we generated, including the ones not in HATECHECK .", "Pre-Trained Transformer Models We test an uncased BERT-base model (Devlin et al., 2019), which has been shown to achieve near state-of-the-art performance on several abuse detection tasks (Tran et al., 2020).", "We fine-tune BERT on two widely-used hate speech datasets from Davidson et al. (2017) and Founta et al. (2018).", "The Davidson et al. (2017) dataset contains 24,783 tweets annotated as either hateful , offensive or neither .", "The Founta et al. (2018) dataset comprises 99,996 tweets annotated as hateful , abusive , spam and normal .", "For both datasets, we collapse labels other than hateful into a single non-hateful label to match HATECHECK 's binary format.", "This is aligned with the original multi-class setup of the two datasets.", "Davidson et al. (2017), for instance, explicitly characterise offensive content in their dataset as non-hateful.", "Respectively, hateful cases make up 5.8% and 5.0% of the datasets.", "Details on both datasets and pre-processing steps can be found in Appendix D.",
"In the following, we denote BERT fine-tuned on binary Davidson et al. (2017) data by B-D and BERT fine-tuned on binary Founta et al. (2018) data by B-F .", "To account for class imbalance, we use class weights emphasising the hateful minority class (He and Garcia, 2009).", "For both datasets, we use a stratified 80/10/10 train/dev/test split.", "Macro F1 on the held-out test sets is 70.8 for B-D and 70.3 for B-F .", "For better comparability to previous work, we also fine-tuned unweighted versions of our models on the original multiclass D and F data; their performance matches SOTA results (Mozafari et al., 2019; Cao et al., 2020), with details in Appendix F.", "Details on model training and parameters can be found in Appendix E. Commercial Models We test Google Jigsaw's Perspective ( P ) and Two Hat's SiftNinja ( SN ).", "Both are popular models for content moderation developed by major tech companies that can be accessed by registered users via an API (www.perspectiveapi.com and www.siftninja.com).", "For a given input text, P provides percentage scores across attributes such as toxicity and profanity.", "We use 'identity attack', which aims at identifying 'negative or hateful comments targeting someone because of their identity' and thus aligns closely with our definition of hate speech.", "We convert the percentage score to a binary label using a cutoff of 50%.", "We tested P in December 2020.", "For SN , we use its 'hate speech' attribute ('attacks [on] a person or group on the basis of personal attributes or identities'), which distinguishes between 'mild', 'bad', 'severe' and 'no' hate.", "We mark all but 'no' hate as 'hateful' to obtain binary labels.", "We tested SN in January 2021.", "We assess model performance on HATECHECK using accuracy, i.e. the proportion of correctly classified test cases.", "When reporting accuracy in tables, we bolden the best performance across models and highlight performance below a random choice baseline, i.e. 50% for our binary task, in cursive red.",
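The evaluation loop itself is straightforward to reproduce in outline. The sketch below shows how a percentage-style attribute score could be binarised with the 50% cutoff described above, and how accuracy per functional test might be computed; the case format and the predict callable are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

def binarise(score, cutoff=0.5):
    """Map a percentage-style attribute score (e.g. an 'identity attack' score
    in [0, 1]) to the binary label used by HATECHECK."""
    return "hateful" if score >= cutoff else "non-hateful"

def accuracy_per_test(cases, predict):
    """cases: iterable of (functionality, text, gold_label) triples;
    predict: callable mapping text to 'hateful' or 'non-hateful'.
    Returns accuracy for each functional test separately."""
    correct, total = defaultdict(int), defaultdict(int)
    for functionality, text, gold in cases:
        total[functionality] += 1
        correct[functionality] += int(predict(text) == gold)
    return {f: correct[f] / total[f] for f in total}
```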
"Performance Across Labels All models show clear performance deficits when tested on hateful and non-hateful cases in HATECHECK (Table 2).", "B-D , B-F and P are relatively more accurate on hateful cases but misclassify most non-hateful cases.", "Overall, P performs best.", "SN performs worst and is strongly biased towards classifying all cases as non-hateful, making it highly accurate on non-hateful cases while misclassifying most hateful cases.", "Evaluating models on each functional test (Table 1) reveals specific model weaknesses.", "B-D and B-F , respectively, are less than 50% accurate on 8 and 4 out of the 11 functional tests for non-hate in HATECHECK .", "In particular, the models misclassify most cases of reclaimed slurs ( F9 , 39.5% and 33.3% correct), negated hate ( F15 , 12.8% and 12.0% correct) and counter speech ( F20/21 , 26.6%/29.1% and 32.9%/29.8% correct).", "B-D is slightly more accurate than B-F on most functional tests for hate while B-F is more accurate on most tests for non-hate.", "Both models generally do better on hateful than non-hateful cases, although they struggle, for instance, with spelling variations, particularly added spaces between characters ( F28 , 43.9% and 37.6% correct) and leet speak spellings ( F29 , 48.0% and 43.9% correct).", "P performs better than B-D and B-F on most functional tests.", "It is over 95% accurate on 11 out of 18 functional tests for hate and substantially more accurate than B-D and B-F on spelling variations ( F25-29 ).", "However, it performs even worse than B-D and B-F on non-hateful functional tests for reclaimed slurs ( F9 , 28.4% correct), negated hate ( F15 , 3.8% correct) and counter speech ( F20/21 , 15.6%/18.4% correct).", "Due to its bias towards classifying all cases as non-hateful, SN misclassifies most hateful cases and is near-perfectly accurate on non-hateful functional tests.", "Exceptions to the latter are counter speech ( F20/21 , 79.8%/79.4% correct) and non-hateful slur usage ( F8/9 , 33.3%/18.5% correct).", "Performance on Individual Functional Tests Individual functional tests can be investigated further to show more granular model weaknesses.", "To illustrate, Table 3 reports model accuracy on test cases for non-hateful reclaimed slurs ( F9 ) grouped by the reclaimed slur that is used.", "Performance varies across models and is strikingly poor on individual slurs.", "B-D misclassifies all instances of f*g, f*ggot and q*eer.", "B-F and P perform better for q*eer, but fail on n*gga.", "SN fails on all cases but reclaimed uses of b*tch.", "Performance Across Target Groups HATECHECK can test whether models exhibit 'unintended biases' (Dixon et al., 2018) by comparing their performance on cases which target different groups.", "To illustrate, Table 4 shows model accuracy on all test cases created from [IDENTITY] templates, which only differ in the group identifier.", "B-D misclassifies test cases targeting women twice as often as those targeted at other groups.", "B-F also performs relatively worse for women and fails on most test cases targeting disabled people.", "By contrast, P is consistently around 80% and SN around 25% accurate across target groups.", "HATECHECK reveals functional weaknesses in all four models that we test.", "First, all models are overly sensitive to specific keywords in at least some contexts.", "B-D , B-F and P perform well for both hateful and non-hateful cases of profanity ( F10/11 ), which shows that they can distinguish between different 
uses of certain profanity terms.", "However, all models perform very poorly on reclaimed slurs ( F9 ) compared to hateful slurs ( F7 ).", "Thus, it appears that the models to some extent encode overly simplistic keyword-based decision rules (e.g. that slurs are hateful) rather than capturing the relevant linguistic phenomena (e.g. that slurs can have non-hateful reclaimed uses).", "Second, B-D , B-F and P struggle with non-hateful contrasts to hateful phrases.", "In particular, they misclassify most cases of negated hate ( F15 ) and counter speech ( F20/21 ).", "Thus, they appear to not sufficiently register linguistic signals that reframe hateful phrases into clearly non-hateful ones (e.g. No Muslim deserves to die).", "Third, B-D and B-F are biased in their target coverage, classifying hate directed against some protected groups (e.g. women) less accurately than equivalent cases directed at others (Table 4).", "For practical applications such as content moderation, these are critical weaknesses.", "Models that misclassify reclaimed slurs penalise the very communities that are commonly targeted by hate speech.", "Models that misclassify counter speech undermine positive efforts to fight hate speech.", "Models that are biased in their target coverage are likely to create and entrench biases in the protections afforded to different groups.", "As a suite of black-box tests, HATECHECK only offers indirect insights into the source of these weaknesses.", "Poor performance on functional tests can be a consequence of systematic gaps and biases in model training data.", "It can also indicate a more fundamental inability of the model's architecture to capture relevant linguistic phenomena.", "B-D and B-F share the same architecture but differ in performance on functional tests and in target coverage.", "This reflects the importance of training data composition, which previous hate speech research has emphasised (Wiegand et al., 2019; Nejadgholi and Kiritchenko, 2020).", "Future work could investigate the provenance of model weaknesses in more detail, for instance by using test cases from HATECHECK to inoculate training data (Liu et al., 2019).", "If poor model performance does stem from biased training data, models could be improved through targeted data augmentation (Gardner et al., 2020).", "HATECHECK users could, for instance, sample or construct additional training cases to resemble test cases from functional tests that their model was inaccurate on, bearing in mind that this additional data might introduce other unforeseen biases.", "The models we tested would likely benefit from training on additional cases of negated hate, reclaimed slurs and counter speech.", "Good performance on a functional test in HATECHECK only reveals the absence of a particular weakness, rather than necessarily characterising a generalisable model strength.", "This negative predictive power (Gardner et al., 2020) is common, to some extent, to all finite test sets.", "Thus, claims about model quality should not be overextended based on positive HATECHECK results.", "In model development, HATECHECK offers targeted diagnostic insights as a complement to rather than a substitute for evaluation on held-out test sets of real-world hate speech.", "Each test case in HATECHECK is a separate English-language text document.", "Thus, HATECHECK does not test functionalities related to context outside individual documents, modalities other than text or languages other than English.", "Future research could expand HATECHECK to 
include functional tests covering such aspects.", "Functional tests in HATECHECK cover distinct expressions of hate and non-hate.", "Future work could test more complex compound statements, such as cases combining slurs and profanity.", "Further, HATECHECK is static and thus does not test functionalities related to language change.", "This could be addressed by live datasets, such as dynamic adversarial benchmarks (Nie et al., 2020; Vidgen et al., 2020b; Kiela et al., 2021).", "Future research could expand HATECHECK to cover additional protected groups.", "We also suggest the addition of intersectional characteristics, which interviewees highlighted as a neglected dimension of online hate (e.g. I17: As a black woman, I receive abuse that is racialised and gendered).", "Relatedly, future work could cover additional slurs beyond those covered by HATECHECK .", "Lastly, future research could craft test cases using more platform- or community-specific language than HATECHECK 's more general test cases.", "It could also test hate that is more specific to particular target groups, such as misogynistic tropes.", "Targeted diagnostic datasets like the sets of test cases in HATECHECK have been used for model evaluation across a wide range of NLP tasks, such as natural language inference (Naik et al., 2018; McCoy et al., 2019), machine translation (Isabelle et al., 2017; Belinkov and Bisk, 2018) and language modelling (Marvin and Linzen, 2018; Ettinger, 2020).", "For hate speech detection, however, they have seen very limited use.", "Palmer et al. (2020) compile three datasets for evaluating model performance on what they call complex offensive language , specifically the use of reclaimed slurs, adjective nominalisation and linguistic distancing.", "They select test cases from other datasets sampled from social media, which introduces substantial disagreement between annotators on labels in their data.", "Dixon et al. (2018) use templates to generate synthetic sets of toxic and non-toxic cases, which resembles our method for test case creation.", "They focus primarily on evaluating biases around the use of group identifiers and do not validate the labels in their dataset.", "Compared to both approaches, HATECHECK covers a much larger range of model functionalities, and all test cases, which we generated specifically to fit a given functionality, have clear gold standard labels, which are validated by near-perfect agreement between annotators.", "In its use of contrastive cases for model evaluation, HATECHECK builds on a long history of minimally-contrastive pairs in NLP (e.g. Levesque et al., 2012; Sennrich, 2017; Glockner et al., 2018; Warstadt et al., 2020).", "Most relevantly, Kaushik et al. (2020) and Gardner et al. (2020) propose augmenting NLP datasets with contrastive cases for training more generalisable models and enabling more meaningful evaluation.", "We built on their approaches to generate non-hateful contrast cases in our test suite, which is the first application of this kind for hate speech detection.", "In terms of its structure, HATECHECK is most directly influenced by the CHECKLIST framework proposed by Ribeiro et al. 
(2020).", "However, while they focus on demonstrating its general applicability across NLP tasks, we put more emphasis on motivating the selection of functional tests as well as constructing and validating targeted test cases specifically for the task of hate speech detection.", "In this article, we introduced HATECHECK , a suite of functional tests for hate speech detection models.", "We motivated the selection of functional tests through interviews with civil society stakeholders and a review of previous hate speech research, which grounds our approach in both practical and academic applications of hate speech detection models.", "We designed the functional tests to offer contrasts between hateful and non-hateful content that are challenging to detection models, which enables more accurate evaluation of their true functionalities.", "For each functional test, we crafted sets of targeted test cases with clear gold standard labels, which we validated through a structured annotation process.", "We demonstrated the utility of HATECHECK as a diagnostic tool by testing near-state-of-the-art transformer models as well as two commercial models for hate speech detection.", "HATECHECK showed critical weaknesses for all models.", "Specifically, models appeared overly sensitive to particular keywords and phrases, as evidenced by poor performance on tests for reclaimed slurs, counter speech and negated hate.", "The transformer models also exhibited strong biases in target coverage.", "Online hate is a deeply harmful phenomenon, and detection models are integral to tackling it.", "Typically, models have been evaluated on held-out test data, which has made it difficult to assess their generalisability and identify specific weaknesses.", "We hope that HATECHECK 's targeted diagnostic insights help address this issue by contributing to our understanding of models' limitations, thus aiding the development of better models in the future.", "We thank all interviewees for their participation.", "We also thank reviewers for their constructive feedback.", "Paul Röttger was funded by the German Academic Scholarship Foundation.", "Bertram Vidgen and Helen Margetts were supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1, particularly the Criminal Justice System theme within that grant, and the Hate Speech: Measures & Counter-Measures project at The Alan Turing Institute.", "Dong Nguyen was supported by the Digital Society The Informed Citizen research programme, which is (partly) financed by the Dutch Research Council (NWO), project 410.19.007.", "Zeerak Waseem was supported in part by the Canada 150 Research Chair program and the UK-Canada AI Artificial Intelligence Initiative.", "Janet B. 
Pierrehumbert was supported by EPSRC Grant EP/T023333/1.", "This supplementary section addresses relevant ethical considerations that were not explicitly discussed in the main body of our article.", "Interview Participant Rights All interviewees gave explicit consent for their participation after being informed in detail about the research use of their responses.", "In all research output, quotes from interview responses were anonymised.", "We also did not reveal specific participant demographics or affiliations.", "Our interview approach was approved by the Alan Turing Institute's Ethics Review Board.", "Intellectual Property Rights The test cases in HATECHECK were crafted by the authors.", "As synthetic data, they pose no risk of violating intellectual property rights.", "Annotator Compensation We employed a team of ten annotators to validate the quality of the HATECHECK dataset.", "Annotators were compensated at a rate of £16 per hour.", "The rate was set 50% above the local living wage (£10.85), although all work was completed remotely.", "All training time and meetings were paid.", "Intended Use HATECHECK 's intended use is as an evaluative tool for hate speech detection models, providing structured and targeted diagnostic insights into model functionalities.", "We demonstrated this use of HATECHECK in Section 3.", "We also briefly discussed alternative uses of HATECHECK , e.g. as a starting point for data augmentation.", "These uses aim at aiding the development of better hate speech detection models.", "Potential Misuse Researchers might overextend claims about the functionalities of their models based on their test performance, which we would consider a misuse of HATECHECK .", "We directly addressed this concern by highlighting HATECHECK 's negative predictive power, i.e. the fact that it primarily reveals model weaknesses rather than necessarily characterising generalisable model strengths, as one of its limitations.", "For the same reason, we emphasised the limits to HATECHECK 's coverage, e.g. in terms of slurs and identity terms.", "Rui Cao, Roy Ka-Wei Lee, and Tuan-Anh Hoang. 2020. DeepHate: Hate speech detection via multi-faceted text representations. In Proceedings of the 12th ACM Conference on Web Science, pages 11-20.", "Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34-48.", "Paula Fortuna and Sérgio Nunes. 2018. A survey on automatic detection of hate speech in text. ACM Computing Surveys (CSUR), 51(4):1-30." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "method", "other", "other", "objective", "other", "objective", "abstain", "objective", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", 
"abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain" ]
[ "Recent work has demonstrated the vulnerability of modern text classifiers to universal adversarial attacks , which are input-agnostic sequences of words added to text processed by classifiers.", "Despite being successful, the word sequences produced in such attacks are often ungrammatical and can be easily distinguished from natural text.", "We develop adversarial attacks that appear closer to natural English phrases and yet confuse classification systems when added to benign inputs.", "We leverage an adversarially regularized autoencoder (ARAE) (Zhao et al., 2018a) to generate triggers and propose a gradient-based search that aims to maximize the downstream classifier's prediction loss.", "Our attacks effectively reduce model accuracy on classification tasks while being less identifiable than prior models as per automatic detection metrics and human-subject studies.", "Our aim is to demonstrate that adversarial attacks can be made harder to detect than previously thought and to enable the development of appropriate defenses.", "1 1 Introduction Adversarial attacks have recently been quite successful in foiling neural text classifiers (Jia and Liang, 2017; Ebrahimi et al., 2018).", "Universal adversarial attacks (Wallace et al., 2019; Behjati et al., 2019) are a sub-class of these methods where the same attack perturbation can be applied to any input to the target classifier .", "These attacks, being input-agnostic, point to more serious shortcomings in trained models since they do not require re-generation for each input.", "However, the attack sequences generated by these methods are often meaningless and irregular text (e.g., zoning tapping fiennes from Wallace et al. (2019)).", "While Equal contribution 1 Our code is available at https://github.com/ Hsuan-Tung/universal_attack_natural_trigger .", "human readers can easily identify them as unnatural, one can also use simple heuristics to spot such attacks.", "For instance, the words in the above attack trigger have an average frequency of 14 compared to 6700 for words in benign inputs in the Stanford Sentiment Treebank (SST) (Socher et al., 2013).", "In this paper, we design natural attack triggers by using an adversarially regularized autoencoder (ARAE) (Zhao et al., 2018a), which consists of an auto-encoder and a generative adversarial network (GAN).", "We develop a gradient-based search over the noise vector space to identify triggers with a good attack performance.", "Our method Natural Universal Trigger Search (NUTS) uses projected gradient descent with l 2 norm regularization to avoid using out-of-distribution noise vectors and maintain the naturalness of text generated.", "2 Our attacks perform quite well on two different classification tasks sentiment analysis and natural language inference (NLI).", "For instance, the phrase combined energy efficiency , generated by our approach, results in a classification accuracy of 19.96% on negative examples on the Stanford Sentiment Treebank (Socher et al., 2013).", "Furthermore, we show that our attack text does better than prior approaches on three different measures average word frequency, loss under the GPT-2 language model (Radford et al., 2019), and errors identified by two online grammar checking tools (scr; che).", "A human judgement study shows that up to 77% of raters find our attacks more natural than the baseline and almost 44% of humans find our attack triggers concatenated with benign inputs to be natural.", "This demonstrates that using techniques similar to ours, 
adversarial attacks could be made much harder to detect than previously thought and we require the development of appropriate defenses in the long term for securing our NLP models.", "Input-dependent attacks These attacks generate specific triggers for each different input to a classifier.", "Jia and Liang (2017) fool reading comprehension systems by adding a single distractor sentence to the input paragraph.", "Ebrahimi et al. (2018) replace words of benign texts with similar tokens using word embeddings.", "Similarly, Alzantot et al. (2018) leverage genetic algorithms to design word-replacing attacks.", "Zhao et al. (2018b) adversarially perturb latent embeddings and use a text generation model to perform attacks.", "Song et al. (2020) develop natural attacks to cause semantic collisions, i.e., to make NLP models judge semantically unrelated texts as similar.", "Universal attacks Universal attacks are input-agnostic, and hence word-replacing and embedding-perturbing approaches are not applicable.", "Wallace et al. (2019) and Behjati et al. (2019) concurrently proposed to perform gradient-guided searches over the space of word embeddings to choose attack triggers.", "In both cases, the attack triggers are meaningless and can be easily detected by a semantic checking process.", "In contrast, we generate attack triggers that appear more natural and retain semantic meaning.", "In computer vision, GANs have been used to create universal attacks (Xiao et al., 2018; Poursaeed et al., 2018).", "Concurrent to our work, Atanasova et al. (2020) design label-consistent natural triggers to attack fact checking models.", "They first predict unigram triggers and then use a language model conditioned on the unigram to generate natural text as the final attack, while we generate the trigger directly.", "We build upon the universal adversarial attacks proposed by Wallace et al. 
(2019).", "To enable natural attack triggers, we use a generative model which produces text from a continuous vector input, and perform a gradient-guided search over this input space.", "The resulting trigger, which is added to benign text inputs, is optimized so as to maximally increase the loss under the target classification model.", "Problem formulation Consider a pre-trained text classifier F to be attacked.", "Given a set of benign input sequences {x} with the same ground truth label y , the classifier has been trained to predict F(x) = y .", "Our goal is to find a single input-agnostic trigger, t , that, when concatenated with any benign input, causes F to perform an incorrect classification, i.e., F([t; x]) ≠ y , where ; represents concatenation.", "(Figure 1: Overview of our attack. Based on the gradient of the target model's loss function, we iteratively update the noise vector n with small perturbations to obtain successful and natural attack triggers.)", "In addition, we also need to ensure the trigger t is natural, fluent text.", "Attack trigger generation To ensure the trigger is natural, fluent and carries semantic meaning, we use a pre-trained adversarially regularized autoencoder (ARAE) (Zhao et al., 2018a) (details in Section 4).", "The ARAE consists of an encoder-decoder structure and a GAN (Goodfellow et al., 2014).", "The input is a standard Gaussian noise vector n , which is first mapped to a latent vector z by the generator.", "Then the decoder uses this z to generate a sequence of words, in our case the trigger t .", "This trigger is then concatenated with a set of benign texts {x} to get full attack texts {x′}.", "The overall process can be formulated as follows: z = GENERATOR(n); t = DECODER(z); x′ = [t; x]. We then pass each x′ into the target classifier and compute the gradient of the classifier's loss with respect to the noise vector, ∇_n L(F(x′), y).", "Back-propagating through the decoder is not straightforward since it produces discrete symbols.", "Hence, we use a reparameterization trick similar to the one in Gumbel softmax (Jang et al., 2017) to sample words from the output vocabulary of the ARAE model as a one-hot encoding of triggers, while allowing gradient backpropagation.", "Figure 1 provides an overview of our attack algorithm, which we call Natural Universal Trigger Search (NUTS).", "The initial noise vector n_0 is drawn from a standard multivariate Gaussian distribution.", "While we can change this noise vector to produce different outputs, simple gradient search may veer significantly off-course and lead to bad generations.", "To prevent this, following Carlini and Wagner (2017), we use projected gradient descent with an l2-norm constraint to ensure the noise n is always within a limited ball around n_0 .", "We iteratively update n as: n_{t+1} = Π_{B_ε(n_0)}[n_t + η ∇_{n_t} L(F(x′), y)], (1) where Π_{B_ε(n_0)} denotes projection onto the l2 ball B_ε(n_0) = {n : ‖n − n_0‖_2 ≤ ε} and η is the step size.", "We try different settings for the number of attack steps, ε and η, selecting the values based on the quality of the output triggers.", "In our experiments, we use 1000 attack steps with ε = 10 and η = 1000.", "Final trigger selection Since our process is not deterministic, we initialize multiple independent noise vectors (256 in our experiments) and perform our updates (1) to obtain many candidate triggers.",
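A minimal PyTorch sketch of the projected-gradient update in Eq. (1) is given below. It assumes a differentiable loss_fn that runs the ARAE generator/decoder (with the Gumbel-softmax-style relaxation described above) and the target classifier on the trigger concatenated with benign inputs; all names are illustrative rather than the authors' code, and the re-ranking of candidates is described next.

```python
import torch

def project_l2_ball(n, n0, eps):
    """Project n back onto the l2 ball of radius eps centred at the initial noise n0."""
    delta = n - n0
    norm = delta.norm(p=2)
    return n0 + delta * (eps / norm) if norm > eps else n

def search_trigger(loss_fn, n0, steps=1000, eps=10.0, eta=1000.0):
    """Gradient ascent on the target classifier's loss w.r.t. the noise vector (Eq. 1)."""
    n = n0.clone()
    for _ in range(steps):
        n = n.detach().requires_grad_(True)
        loss = loss_fn(n)                     # classifier loss on [t; x] for the batch
        grad, = torch.autograd.grad(loss, n)  # d loss / d n, through the relaxed decoder
        with torch.no_grad():
            n = project_l2_ball(n + eta * grad, n0, eps)
    return n.detach()
```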
"Then, we re-rank the triggers to balance the target classifier accuracy m1 (lower is better) and the naturalness in terms of the average per-token cross-entropy under GPT-2, m2 (lower is better), using the score m1 + λ·m2.", "We select λ = 0.05 to balance the difference in scales of m1 and m2.", "We demonstrate our attack on two tasks: sentiment analysis and natural language inference .", "We use the method of Wallace et al. (2019) as a baseline and use the same datasets and target classifiers for comparison.", "The baseline attack uses beam search to enlarge the search space in each step.", "We also tried the baseline attack with 256 random initializations followed by selecting the final trigger using the same criterion as our attack, but its attack success/naturalness remained unchanged.", "For the text generator, we use an ARAE model pre-trained on the 1 Billion Word dataset (Chelba et al., 2014).", "For both our attack (NUTS) and the baseline, we limit the vocabulary of attack trigger words to the overlap of the classifier and ARAE vocabularies.", "We generate triggers using the development set of the tasks and report results on the test set (results on both sets in the Appendix).", "Defense metrics We employ three simple defense metrics to measure the naturalness of attacks: 1. Word frequency: The average frequency of words in the trigger, computed using empirical estimates from the training set of the target classifier.", "2. Language model loss: The average per-token cross-entropy loss under a pre-trained language model, GPT-2 (Radford et al., 2019).", "3. Automatic grammar checkers: We calculate the average number of errors in the attack sequences using two online grammar checkers, Scribens (scr) and Chegg Writing (che).", "(Table 1: Errors per word reported by the online grammar checkers Scribens (scr) and Chegg (che).)",
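A sketch of how the language-model metric (m2) and the combined re-ranking score could be computed with the Hugging Face transformers library is shown below; this is an illustration under stated assumptions, not the authors' implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def gpt2_loss(text):
    """Average per-token cross-entropy under GPT-2 (lower = more natural)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    return model(ids, labels=ids).loss.item()  # HF shifts labels internally

def rerank_score(m1_accuracy, trigger_text, lam=0.05):
    """Combined trigger-selection score m1 + lambda * m2 (lower is better)."""
    return m1_accuracy + lam * gpt2_loss(trigger_text)
```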
"Setup We use a 2-layer LSTM (Hochreiter and Schmidhuber, 1997) followed by a linear layer for sentiment predictions.", "The model is trained on the binary Stanford Sentiment Treebank (SST) (Socher et al., 2013), using AllenNLP (Gardner et al., 2018).", "To avoid generating sentiment words in the trigger and directly changing the instance's sentiment, we exclude a list of sentiment words (sen) from the trigger vocabulary, following Wallace et al. (2019).", "Results Table 2 (top half) captures the results of both our attack and the baseline (Wallace et al., 2019).", "Our method is able to reduce the classifier's test accuracy significantly, down to 8.55% in the best attack case.", "Although less successful, our triggers are much more natural, fluent and readable than the baseline's.", "Figure 2 shows the difference in statistics between benign text and each attack according to the metrics of word frequency and GPT-2 loss.", "Our generated triggers are much closer in these statistics to the original text inputs than the baseline's.", "(Table 2: Attack results on SST and SNLI, listing trigger text and classifier accuracy for NUTS (our attack) and the baseline (Wallace et al., 2019) for each task, trigger length and test set.)", "Further, as shown in Table 1, two grammar checkers (scr; che) report 12.50% and 21.88% errors per word on our attack triggers, compared to 15.63% and 28.13% for the baseline.", "Setup We use the SNLI dataset (Bowman et al., 2015) and the Enhanced Sequential Inference Model (ESIM) (Chen et al., 2017) with GloVe embeddings (Pennington et al., 2014) as the classifier.", "We attack the classifier by adding a trigger to the front of the hypothesis.", "Results From Table 2, we see that both our attack and the baseline decrease the accuracy to almost 0% on entailment and neutral examples.", "On contradiction examples, our attack brings the accuracy down to 26.78% while the baseline decreases it to 23.02%.", "Although less successful, our attacks are much more natural than the baseline.", "In Figure 2, our attacks are closer to the word frequency of benign inputs and even achieve a lower GPT-2 loss than the benign text.", "In Table 1, two grammar checkers (scr; che) also report lower errors on our attacks compared to the baseline.", "To further validate that our attacks are more natural than the baseline, we perform a human-subject study on Amazon Mechanical Turk.", "We collect ratings by: (1) providing a pair of our trigger vs. baseline trigger (with and without benign text) and asking the worker to select the more natural one; (2) providing a piece of text (our attack text/baseline attack text/benign input) and asking the human to determine whether it is naturally generated or not.", "Both conditions allow the human to choose a 'Not sure' option.", "We generated attack triggers with lengths of 3, 5, and 8 (see Appendix for details) and created 450 comparison pairs for (1) and 675 pieces of text (225 for each type) for (2).", "For each instance, we collect 5 different human judgements and report average scores.", "Up to 77% of workers find our attack trigger to be more natural than the baseline, while 61.16% judge our attack to be more natural even when concatenated with benign text.",
.", "27% human subjects think our attack inputs are naturally generated.", "Although it is lower than the 83 .", "11% for real natural inputs, it is still significantly higher than the 22 .", "84% of baseline attack inputs, which shows that our attacks are more natural and harder to detect than the baseline for humans.", "Similar to Wallace et al. (2019), we also evaluate the attack transferability of our universal adversarial attacks to different models and datasets.", "A transferable attack further decreases the assumptions being made: for instance, the adversary may not need white-box access to a target model and instead generate attack triggers using its own model to attack the target model.", "We first evaluate transferability of our attack across different model architectures.", "Besides the LSTM classifier in Section 4.1, we also train a BERT-based classifier on the SST dataset with 92.86% and 91.15% test accuracy on positive and negative data.", "From Table 4, we can see that the transferred attacks, generated for the LSTM model, lead to 14% 51% accuracy drop on the target BERT model.", "We also evaluate attack transferability across different datasets.", "In addition to the SST dataset in Section 4.1, we train a different LSTM classifier with the same model architecture on the IMDB sentiment analysis dataset, which gets 89.75% and 89.85% test accuracy on positive and negative data.", "Our attacks transfer in this case also, leading to accuracy drops of 18% 34% on the target model (Table 4).", "We developed universal adversarial attacks with natural triggers for text classification and experimentally demonstrated that our model can generate attack triggers that are both successful and appear natural to humans.", "Our main goals are to demonstrate that adversarial attacks can be made harder to detect than previously thought and to enable the development of appropriate defenses.", "Future work can explore better ways to optimally balance attack success and trigger quality, while also investigating ways to detect and defend against them.", "The techniques developed in this paper have potential for misuse in terms of attacking existing NLP systems with triggers that are hard to identify and/or remove even for humans.", "However, our intention is not to harm but instead to publicly release such attacks so that better defenses can be developed in the future.", "This is similar to how hackers expose bugs/vulnerabilities in software publicly.", "Particularly, we have demonstrated that adversarial attacks can be harder to detect than previously thought (Wallace et al., 2019) and therefore can present a serious threat to current NLP systems.", "This indicates our work has a long-term benefit to the community.", "Further, while conducting our research, we used the ACM Ethical Code as a guide to minimize harm.", "Our attacks are not against real-world machine learning systems.", "We are grateful to the anonymous reviewers at NAACL for valuable feedback.", "We thank Austin Wang and Michael Hu for suggestions on using Amazon Mechanical Turk." ]
[ "abstain", "abstain", "objective", "abstain", "method", "objective", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "objective", "method", "method", "result", "result", "result", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "abstain", "objective", "abstain", "method", "other", "other", "other", "objective", "abstain", "other", "abstain", "method", "other", "other", "method", "other", "method", "other", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "other" ]
[ "Transformer-based language models (TLMs), such as BERT, ALBERT and GPT-3, have shown strong performance in a wide range of NLP tasks and currently dominate the field of NLP.", "However, many researchers wonder whether these models can maintain their dominance forever.", "Of course, we do not have answers now, but, as an attempt to find better neural architectures and training schemes, we pretrain a simple CNN using a GAN-style learning scheme and Wikipedia data, and then integrate it with standard TLMs.", "We show that on the GLUE tasks, the combination of our pretrained CNN with ALBERT outperforms the original ALBERT and achieves a similar performance to that of SOTA.", "Furthermore, on open-domain QA (Quasar-T and SearchQA), the combination of the CNN with ALBERT or RoBERTa achieved stronger performance than SOTA and the original TLMs.", "We hope that this work provides a hint for developing a novel strong network architecture along with its training scheme.", "Our source code and models are available at https://github.com/nict-wisdom/bertac.", "Transformer-based language models (TLMs) such as BERT (Devlin et al., 2019), ALBERT (Lan et al., 2020), and GPT-3 (Brown et al., 2020) have shown that large-scale self-supervised pretraining leads to strong performance on various NLP tasks.", "Many researchers have used TLMs for various downstream tasks, possibly as subcomponents of their methods, and/or they have focused on scaling up TLMs or improving their pretraining schemes.", "As a result, other architectures like Recurrent Neural Networks (RNN) (Hochreiter and Schmidhu-ber, 1997; Cho et al., 2014) and Convolutional Neural Networks (CNN) (LeCun et al., 1999) are fading away.", "In this work, we propose a method !", "for improving TLMs by integrating a simple conventional CNN to them. We pretrained this CNN on Wikipedia using a Generative Adversarial Network (GAN) style training scheme ( Goodfellow et al., 2014), and then combined it with TLMs. Oh et al. (2019) similarly used GAN-style training to improve a QA model using a CNN, but their training scheme was applicable only to QA-specific datasets. On the other hand, similarly to TLM, our proposed method for training the CNN is independent of specific tasks. We show that the combination of this CNN with TLMs can achieve higher performance than that of the original TLMs on publicly available datasets for several distinct tasks. We hope that this gives an insight into how to develop novel strong network architectures and training schemes.", "We call our combination of a TLM and a CNNBERTAC (BERT-style TLM with an Adversarially pretrained Convolutional neural network). Its architecture is illustrated in Fig. 1. We do not impose any particular restriction on the TLM in BERTAC, so any TLM, ALBERT (Lan et al.,", "We used the CNN to compute representations of a slightly modified version of the input given to a TLM. To integrate these representations with those of the TLM, we stacked on top of the TLM several layers of Transformers for Integrating External Representation (TIERs) , which are our modified version of normal transformers (Vaswani et al., 2017). A TIER has the same architecture as that of a normal transformer encoder except for its attention: we replace the transformer's self-attention with an attention based on the representation provided by the CNN. 
"We pretrained the CNN using a GAN-style training scheme in order to generate representations of sentences rather freely without the constraint of token embedding prediction in the masked language modeling used for TLMs, as we explain later. For the training, we used masked sentences autogenerated from Wikipedia. As in the masked language modeling, neither human intervention nor downstream task-specific hacking is required. As illustrated in Fig. 2, the GAN-style training requires three networks, namely, a discriminator D and two CNN-based generators R and F . Once the training is done, we use the generator F as the CNN in BERTAC. The training data consists of pairs of an entity mention and a sentence in which the entity mention is masked with a special token [EM] . For example, the entity-masked sentence m1 in Table 1 is obtained by masking the entity mention e1, Suvarnabhumi Airport, in the original text s1. The network F generates a vector representation of the masked sentence (m1), while R produces a representation of the masked entity (e1). The discriminator D takes representations generated by either R or F as the input, and it predicts which generator actually gave the representation.", "In the original GAN, a generator learns to generate an artificial image from random noise so that the resulting artificial image is indistinguishable from given real images. By analogy, we used an entity-masked sentence as random noise and a masked entity as a real image.", "In our GAN-style training, we regard the vector representation of a masked entity given by generator R as a real representation of the entity (or the representation of the real image in the above analogy). On the other hand, we regard the representation of the masked sentence, generated by F , as a fake representation of the entity (or the representation of the artificial image generated from the random noise in the above analogy). This representation is deemed fake because the entity is masked in the masked sentence, and F does not know what the entity is exactly. During the training, F should try to deceive the discriminator D by mimicking the real representation and generating a fake representation that is indistinguishable from the real representation of the entity generated by R . On the other hand, R and D , as a team, try to avoid being mimicked by F and also to make the mimic problem harder for F . If everything goes well, once the training is over, F should be able to generate a fake representation of the entity that is similar to its real representation. An interesting point is that F 's output can be interpreted in two ways: it is a representation of a masked sentence because it is computed from the sentence, and at the same time it is a representation of the masked entity because it is indistinguishable from R 's representation of the entity. This duality suggests that F 's output can be seen as a representation of the entire sentence. We exploit F as a CNN in BERTAC as follows: first, we use F to compute a representation of a masked version of the sentence originally given as input to a TLM. The entity mention to be masked is chosen by simple rules and, if the input consists of multiple sentences, we generate a representation of each (masked) input sentence and concatenate these together into a single one. 
Then, this representation is integrated into the output of the TLM through multiple TIER layers. Our GAN-style pretraining is conceptually similar to TLM pretraining with masked language modeling (predicting what a masked word in a sentence should be). However, it was designed to pretrain a model that can generate entity representations rather freely, without sticking strongly to the prediction of token embeddings. Our hypothesis is that such freely generated representations may be useful for improving the performance of downstream tasks. Moreover, we assumed that using multiple text representations computed from different perspectives (i.e., predicting token embeddings and freely generating entity representations) would help to improve the performance of downstream tasks. In our experiments, we show that for the GLUE tasks (Wang et al., 2018), BERTAC's average performance on the development set was 0.7% higher than that of ALBERT, which was used as a subcomponent of BERTAC, leading to a performance on the test set comparable to that of the SOTA (90.3% vs. 90.8% (SOTA)). It also outperformed the SOTA method of open-domain QA (Chen et al., 2017) on Quasar-T (Dhingra et al., 2017) and SearchQA (Dunn et al., 2017) using either ALBERT or RoBERTa. We also compared our method with alternative models using a CNN pretrained in a self-supervised (non-GAN-style) manner to directly predict embeddings of the entity mentions. We confirmed that our method worked better: only the CNN trained by our GAN-style pretraining gave a significant performance improvement over the base TLMs. Note that the computational overhead of BERTAC is reasonably small. It took 20 hours with 16 GPUs to pretrain a single CNN model and 180 hours for the nine models tested with different parameter settings in this work (cf. 480 hours with 96 GPUs for pretraining DeBERTa (He et al., 2021), for example). Moreover, once pretrained, the CNN models can be reused for various downstream tasks and combined with various TLMs, including potentially future ones. As for the number of parameters, BERTAC had just a 14% increase in parameters when ALBERT-xxlarge was used as its base TLM (268M parameters for BERTAC vs. 235M for ALBERT-xxlarge). We confirmed from these results that BERTAC could improve pretrained TLMs with reasonably small computational overhead. The code and models of BERTAC are available at https://github.com/nict-wisdom/bertac. 2 Related Work Pretraining TLMs with entity information: There have been attempts to explicitly learn entity representations from text corpora using TLMs (He et al., 2020; Peters et al., 2019; Sun et al., 2020; Wang et al., 2020a; Xiong et al., 2020; Zhang et al., 2019). Our proposed method is a complementary alternative to these existing methods in the sense that entity representations are integrated into TLMs via CNNs and are not directly produced by the TLMs. Fine-tuning TLMs with external resources or other NNs: Yang et al. (2019a) and Liu et al. (2020) have used knowledge graphs for augmenting TLMs with entity representations during fine-tuning. Unlike these approaches, BERTAC uses unstructured texts rather than clean structured knowledge, such as knowledge graphs, to adversarially train a CNN. Other previous works have proposed combining CNNs or RNNs with BERT for NLP tasks (Lu et al., 2020; Safaya et al., 2020; Shao et al., 2019; Zhang et al., 2020), but their use of CNNs/RNNs was task-specific, so their models were not directly applicable to other tasks.
Adversarial learning for improving TLMs: Oh et al. (2019) proposed a CNN-based answer representation generator for QA that can guess the vector representation of answers from given why-type questions and answer passages. The generator was trained in a GAN-style manner using QA datasets. We took inspiration from their adversarial training scheme to train task-independent representation generators from unsupervised texts (i.e., Wikipedia sentences in which an entity was masked in a cloze-test style). ELECTRA (Clark et al., 2020) also employed an adversarial technique (not a GAN) to pretrain two TLMs: a generator was trained to perform masked language modeling, and a discriminator was trained to distinguish tokens in the training data from tokens replaced by the generator. On downstream tasks, only the discriminator was fine-tuned. In BERTAC, the GAN-style pretraining is applied only to the CNN, thus reducing the training cost. Furthermore, the CNN can be combined easily with any available TLM, even potentially future ones, without having to redo the pretraining. In this work, we show that BERTAC outperformed ELECTRA on the GLUE tasks. Vernikos et al. (2020) proposed a method that uses an adversarial objective and an adversarial classifier for regularizing the fine-tuning process of TLMs, inspired by adversarial learning for domain adaptation (Ganin et al., 2016). Our work uses a GAN-style training scheme only for pretraining CNNs, not for fine-tuning TLMs. 3 Pretraining of CNNs This section describes the training data and training algorithm for our CNN. 3.1 Training data We pretrained our CNN with an entity-masked version of Wikipedia sentences. WikiExtractor (https://github.com/samuelbroscheit/wikiextractor-wikimentions) was used to extract, from the English Wikipedia (September 2020 version), sentences that have at least one entity mention, i.e., an entity with an internal Wikipedia link. Then we randomly selected one entity mention e_i in each sentence and generated an entity-masked sentence m_i by replacing the entire selected mention with [EM]. For example, we generated the masked sentence m_1, [EM] is Thailand's main international air hub, (in Table 1) by replacing the entity mention e_1, Suvarnabhumi Airport, in the sentence s_1, Suvarnabhumi Airport is Thailand's main international air hub, with [EM]. We obtained about 43.3 million pairs of an entity mention and a masked sentence ({(e_i, m_i)}) in this way and used 10% of them (randomly sampled) as the pretraining data for our CNN. 3.2 GAN-style pretraining As illustrated in Fig. 2, the adversarial training is done using three subnetworks: R (real-entity-representation generator), F (fake-entity-representation generator), and D (discriminator). R and F are CNNs with average pooling, and D is a feedforward neural network. Once the training is done, we use the generator F as the CNN in BERTAC. In the training, we regard the representation of a masked entity output by generator R as a real representation of the entity that the fake-entity-representation generator F should mimic. F is trained so that, taking an entity-masked sentence as its input, it can generate a representation of the masked entity mention (called a fake representation of the entity in this work) that D cannot distinguish from the real representation. The representation generated by F is fake in the sense that the entity mention is masked in the input sentence and F cannot know what it is exactly.
As mentioned in the Introduction, our GAN-style pretraining was designed to train a model capable of freely generating entity representations. We assumed that using multiple text representations computed from different perspectives (i.e., prediction of token embeddings in TLMs and generation of entity representations in our CNN) would help to improve the performance of downstream tasks. Algorithm 1: Adversarial Training Scheme. Input: training examples {(e, m)}, training epochs t, mini-batch steps b, mini-batch size n. Output: real representation generator R, fake representation generator F, discriminator D. 1: j ← 1; 2: initialize θ_R, θ_F, and θ_D (the parameters of R, F, and D) with random weights; 3: while j ≤ t do 4: k ← 1; 5: while k ≤ b do 6: sample a mini-batch of n examples {(e_i, m_i)}_{i=1}^n; 7: generate word embeddings {(e_i, m_i)}_{i=1}^n of the examples; 8: update D and R by ascending their stochastic gradient ∇_{θ_D, θ_R} (1/n) Σ_{i=1}^n [log D(R(e_i)) + log(1 − D(F(m_i)))]; 9: update F by descending its stochastic gradient ∇_{θ_F} (1/n) Σ_{i=1}^n log(1 − D(F(m_i))); 10: k ← k + 1; 11: end while; 12: j ← j + 1; 13: end while. For each pair of an entity mention (e_i) and an entity-masked sentence (m_i) in the training data, we first generate two matrices of word embeddings e_i and m_i using word embeddings pretrained on Wikipedia with fastText (Bojanowski et al., 2017). Then, R and F generate, respectively, a real entity representation from e_i and a fake entity representation from m_i. Finally, they are given to D, a feed-forward network that judges whether F or R generated the representations, i.e., whether the representations are real or fake, using the sigmoid outputs of its final logistic regression layer. The pseudo code of the training scheme is given in Algorithm 1. The training proceeds as follows: R and D as a team try to avoid the possibility that D misjudges F's output (i.e., a fake entity representation) as a real entity representation. More precisely, R and D are trained so that D correctly judges the representation R(e_i) given by generator R as real (i.e., D(R(e_i)) = 1) and the representation F(m_i) given by generator F as fake (i.e., D(F(m_i)) = 0). Therefore, the training is carried out with the objective of maximizing log D(R(e_i)) + log(1 − D(F(m_i))) (line 8 in Algorithm 1). On the other hand, F tries to generate the representation F(m_i) so that D judges it as real (i.e., D(F(m_i)) = 1). Thus, F is trained to minimize log(1 − D(F(m_i))) (line 9 in Algorithm 1). This minimax game is iterated for the pre-specified t training epochs. 3.3 Pretraining settings We extracted 43.3 million pairs of an entity mention and a masked sentence from Wikipedia and randomly sampled 10% of them to use as training data (4.33 million pairs, around 700 MB in file size). We used word-embedding vectors in 300 dimensions (for 2.5 million words) pretrained on Wikipedia using fastText (Bojanowski et al., 2017). The embedding vectors were fixed during the training. We set the number of training epochs to 200 (t = 200 in Algorithm 1) and did not use any early-stopping technique. We chose t = 200 from the results of our preliminary experiments, in which we used 10% of the training data and set the number of training epochs t to 100, 200, or 300; the loss robustly converged for t = 200 and t = 300, and thus the earliest point, t = 200, was chosen.
We used the RMSProp optimizer (Tieleman and Hinton, 2012) with a batch size of 4,000 (n = 4,000 and b = 1,084 in Algorithm 1) and a learning rate of 2e-4. We trained nine CNN models with all combinations of the filter window sizes ∈ {{1,2,3}, {2,3,4}, {1,2,3,4}} and the number of filters ∈ {100, 200, 300} for the generators F and R. All of the weights in the CNNs were initialized using He's method (He et al., 2015). We used a logistic regression layer with sigmoid outputs as the discriminator D. The training of a single CNN model took around 20 hours using 16 Nvidia V100 GPUs with 32 GB of memory (180 hours in total for the nine models). We tested all nine CNN models for BERTAC in our GLUE and open-domain QA experiments (Section 5). For each task, the parameters inside the CNNs (as well as the word-embedding vectors) were fixed during the fine-tuning of BERTAC. 4 BERTAC As illustrated in Fig. 1, BERTAC (BERT-style TLM with an Adversarially pretrained Convolutional neural network) incorporates the representation provided by the adversarially pretrained CNN into the representation generated by a TLM. For the integration, we use several layers of TIERs (Transformers for Integrating External Representation) stacked on top of the TLM. 4.1 CNN in BERTAC For simplicity, we describe how the CNN is integrated in BERTAC using the task of recognizing textual entailment (RTE) as an example. BERTAC for the RTE task takes two sentences s_x and s_y as input and predicts whether s_x entails s_y. First, we explain how the adversarially pretrained CNN (generator F in Section 3.2) generates the representation of the two input sentences. We regard the longest common noun phrase of the two sentences as the entity mention to be masked and create entity-masked sentences m_x and m_y from s_x and s_y by masking the noun phrase with [EM] (we use m_x = s_x and m_y = s_y if no common noun phrase is found; for single-sentence tasks such as CoLA (Wang et al., 2018), we regard the longest noun phrase in the sentence as the entity). Then each of the masked sentences m_x and m_y is given to the CNN. Our expectation here is that the CNN generates similar representations from the masked sentences if they have an entailment relation and that this helps to recognize the entailment relation. Note that the CNN in BERTAC is connected to several TIER layers and that, as shown in Fig. 1, its input is iteratively updated so that it provides updated representations to the TIER layers. Let m_x^i ∈ R^{|m_x| × d_w} and m_y^i ∈ R^{|m_y| × d_w} be the matrices of word embeddings of m_x and m_y given to the CNN connected to the i-th TIER layer, where d_w is the dimension of a word embedding. We denote the representation generated by the CNN from the matrix of word embeddings m by CNN(m). The i-th TIER layer is given the concatenation of the two CNN representations of m_x and m_y, r^i = [r_x^i; r_y^i] ∈ R^{2 × d_e}, where r_x^i = CNN(m_x^i) ∈ R^{d_e}, r_y^i = CNN(m_y^i) ∈ R^{d_e}, and d_e is the dimension of the CNN representation. Note that, for single-sentence tasks, r^i = r_x^i, the CNN representation of m_x, is given to the TIER layers. The initial matrices of word embeddings m_x^1 and m_y^1 are obtained using the fastText word embeddings (Bojanowski et al., 2017), the same as those used in our adversarial learning. Then, the updated input matrices m_x^{i+1} and m_y^{i+1} for the (i+1)-th CNN are obtained from the i-th input matrices m_x^i and m_y^i as described below.
For the word embedding m_{x,j}^i of the j-th word in m_x, we compute its bilinear score with respect to r_x^i (Sutskever et al., 2009): m̄_{x,j}^i = softmax_j(m_x^i B_x^i r_x^i) · m_{x,j}^i, where B_x^i ∈ R^{d_w × d_e} is a trainable matrix and softmax_j(v) denotes the j-th element of the softmaxed vector of v. The bilinear score indicates how much the corresponding token should be highlighted as one associated with the CNN representation r_x^i during the update process. We expect that this allows the CNN in the next TIER layer to generate further refined representations from the updated embeddings. We then compute the word embeddings m_x^{i+1} in a highway network manner (Srivastava et al., 2015) as follows: m_x^{i+1} = H_x(m̄_x^i) ⊙ T_x(m_x^i) + m_x^i ⊙ (1 − T_x(m_x^i)), where H_x(m_x^i) = W_h^i m_x^i + b_h^i, T_x(m_x^i) = σ(W_t^i m_x^i + b_t^i), σ is the sigmoid function, ⊙ represents the element-wise product, and W_h^i, W_t^i, b_h^i, and b_t^i are layer-specific trainable parameters. m_y^{i+1} is computed from m_y^i and r_y^i in the same way. During the fine-tuning of BERTAC for downstream tasks, we fix the parameters of the pretrained CNN but train these parameters for updating the CNN's input alongside those of the TLM and the TIERs. 4.2 Transformers for integrating external representation (TIERs) As explained in the Introduction, the main difference between a TIER and a normal transformer lies in the attention mechanism.", "In the TIER attention mechanism, the query representation, which is one of the three inputs of the transformer's self-attention, is replaced with the representation given by the CNN.", "Fig. 3 shows the difference between the TIERs' attention computation and that of normal transformers.", "Attention in normal transformers is computed in the following way: Attention(Q, K, V) = softmax(QK^T / √d_k) V, where Q, K, and V are query, key, and value matrices in R^{l_k × d_k}, l_k is the length of the input sequence, and d_k is the dimension of the keys.", "Q, K, and V all come from the same representation of the token sequence provided by the previous transformer layer.", "The attention should specify how much the corresponding tokens in V should be highlighted, so we designed ours in the same way.", "In TIERs, we use the following attention.", "We basically replace the matrix Q with the CNN's representation r ∈ R^{u × d_k} while keeping the original K and V, where u is the number of sentences in the input of the model (u ∈ {1, 2} in this paper).", "Since r is a matrix with a different size from Q, we needed to adapt the attention computation.", "We first multiply r by K^T, and then the softmaxed result is converted into an l_k × d_k dimensional matrix using the all-ones matrix J_{u,d_k} ∈ R^{u × d_k}.", "Let the resulting matrix be A = (softmax(rK^T / √d_k))^T J_{u,d_k} ∈ R^{l_k × d_k}.", "We apply the attention scores to V by using the element-wise product between matrices: A ⊙ V.", "In addition, the actual CNN representation r_CNN ∈ R^{u × d_e} given by our CNNs usually has a size that does not match the size requirement for r.", "Thus, we convert it to r ∈ R^{u × d_k}, a matrix with d_k columns, as follows: r = r_CNN W + b, where W ∈ R^{d_e × d_k} and b are trainable.", "GLUE (Wang et al., 2018) is a multi-task benchmark composed of nine tasks, including two single-sentence tasks (CoLA and SST-2) and seven two-sentence tasks: the similarity/paraphrase tasks (MRPC, QQP, and STS-B) and the natural language inference tasks (MNLI, QNLI, RTE, and
WNLI).", "Following the previous work of ALBERT (Lan et al. , 2020), we performed single-task fine-tuning for each task under the following settings: single-model for the development set and ensemble for test set submissions.", "As in Liu et al. (2019) and Lan et al. (2020), we report the performance on the development set for each task by averaging over five runs with different random initialization seeds.", "As in Lan et al. (2020), for test set submissions, we fine-tuned the models for the RTE, STS-B, and MRPC tasks by initializing them with the fine-tuned MNLI single-task model, and we also used task-specific modification for CoLA and WNLI to improve scores (see Appendix A for de-tails).", "We explored ensemble settings between 6 and 30 models per task for our test set submission.", "We used ALBERT-xxlarge-v2 (Lan et al., 2020) as the pretrained TLM.", "As hyperparameters for BERTAC, for each task we tested learning rates 2 f 8e-6, 9e-6, 1e-5, 2e-5, 3e-5 g , a linear warmup for the first 6% of steps followed by a linear decay to 0, a maximum sequence length of 128, and all nine CNNs pretrained with different filter settings.", "We set the batch size to 128 for MNLI and QQP and 16 for the other tasks.", "Furthermore, we trained our model with the following set of training epochs: f 1,2,3,4,5 g for MNLI, QQP, and QNLI, f 6,7,8,9,10 g for CoLA, MRPC, RTE, SST-2, and STS-B, and f 90,95,100,105,110 g for WNLI.", "We set the number of TIER layers to 3 after preliminary experiments.", "See Table 9 in Appendix B for a summary of the hyperparameters tested in the GLUE experiments.", "During the fine-tuning of BERTAC, the parameters inside the CNNs (as well as word embeddings of fastText) were fixed as explained in Section 3.3, while those used to update the input to the CNNs were optimized.", "For each task, we selected the pretrained CNN (out of nine) and the BERTAC hyperparameters that gave the best performance on the development data.", "Table 2 shows the results of eight tasks on the GLUE development set: all of them are single-model results.", "Our BERTAC consistently outperformed the previous TLM-based models over seven tasks, except for QQP, and, as a result, showed the best average performance on the development set.", "Crucially, our model improved the average performance around 0.7% over ALBERT, the base TLM in our model.", "This indicates the effectiveness of adversarially trained CNNs and TIERs in BERTAC.", "The test set results obtained from the GLUE leaderboard are summarized in Table 3.", "Our model showed comparable performance to SOTA, DeBERTa/TuringNLRv4, and achieved state-of-the-art results on 3 out of 9 task.", "It also showed better performance than ALBERT, our base TLM, in most tasks.", "To investigate whether our GAN-style pretraining of CNNs contributed to the performance improvement, we also tested the following alternative training schemes for the CNN used in BERTAC.", "Self-supervised CNN: We pretrained the CNN to generate representations of a masked sentence in a self-supervised way as follows: For an entity mention e and an entity-masked sentence m in the training data (Section 3.1), the CNN generates a representation r from the masked sentence trying to minimize MSE (mean squared error) between r and the entity mention's representation e (average word embedding of all tokens in e ).", "Randomly initialized CNN: We did not pretrained the CNNs, but trained them alongside the TLMs during the fine-tuning of BERTAC (the CNNs were randomly initialized).", "We trained 
both the self-supervised and randomly initialized CNNs using the same hyperparameter settings as the GAN-style CNNs (see Section 3.3).", "We confirm from the results in Table 4", "that only the proposed method with our GAN-style CNNs showed a higher average score than ALBERT.", "This suggests the effectiveness of our GAN-style pretraining scheme for CNNs.", "We also tested BERTAC on open-domain QA (Chen et al., 2017) with the publicly available datasets Quasar-T (Dhingra et al., 2017) and SearchQA (Dunn et al., 2017).", "We used the pre-processed version of the datasets provided by Lin et al. (2018) (available at https://github.com/thunlp/OpenQA), which contains passages retrieved for all questions, and followed their data split as described in Table 5.", "We implemented our QA model following the approach of Lin et al. (2018), which combines a passage selector to choose relevant passages from retrieved passages and an answer span selector to identify the answer span in the selected passages.", "For a given question q and the set of retrieved passages P = {p_i}, we computed the probability Pr(a | q, P) of extracting answer span a for question q from P in the following way, and then extracted the answer span â with the highest probability: Pr(a | q, P) = Σ_i Pr(a | q, p_i) Pr(p_i | q, P), where Pr(p_i | q, P) and Pr(a | q, p_i) are computed by the passage selector and the answer span selector, respectively.", "We input [CLS] question [SEP] passage [SEP] to both the passage selector and the answer span selector, where [CLS] and [SEP] are special tokens.", "In the passage selector, the representation of [CLS] in the top TIER layer is fed into a linear layer with a softmax, which computes the probability that the passage contains a correct answer to the question.", "Our BERTAC answer span selector identifies answer spans from passages by computing start and end probabilities of each token in the passages, where we feed the representation of each token in the top layer of TIERs to two linear layers, each with a softmax for the probabilities (Devlin et al., 2019).", "5.2.2 Training details for open-domain QA We used all nine pretrained CNNs, as in the GLUE experiments.", "As pretrained TLMs, we used ALBERT-xxlarge-v2 (Lan et al., 2020) and RoBERTa-large (Liu et al., 2019).", "We set the learning rate to 1e-5, the number of epochs to 2, the maximum sequence length to 384, and the number of TIER layers to 3.", "We used a linear warmup for the first 6% of steps followed by a linear decay to 0 with a batch size of 48 for Quasar-T and 96 for SearchQA.", "We tested all of the pretrained CNNs and chose for each dataset the one that maximizes EM (the percentage of predictions matching exactly one of the ground truth answers).", "See Table 10 in Appendix B for a summary of the hyperparameters tested for open-domain QA.", "We compared BERTAC with the previous works described in Table 6.", "Table 7 shows the performance of all of the methods.", "The subscripts of the TLM-based methods represent the type of pretrained TLM used by each method.", "All the methods were evaluated using EM and F1 score (the average overlap between the prediction and the gold answer).", "BERTAC (ALBERT-xxlarge) outperformed all of the baselines, including the SOTA method (CFORMER), on both EM and F1.", "BERTAC (RoBERTa-large), in the same TLM setting as the SOTA method, showed better performance than
the SOTA except for F1 on Quasar-T.", "These results suggest that our framework is effective for QA tasks as well.", "We also evaluated ablation settings: w/o CNN and TIER, which uses ALBERT-xxlarge alone without our CNN and TIERs; w/o GAN-style CNN, which does not use the CNN pretrained by our GAN-style training scheme but instead the self-supervised CNNs (the same as those used in the GLUE experiments, see Table 4); and w/o update, which does not perform the layer-wise update of the CNN inputs.", "The results in Table 8 suggest that all of the following contributed to the performance improvement: the combination of TLMs and GAN-style CNNs, our GAN-style training of CNNs, and the layer-wise update of the CNN inputs.", "We proposed BERTAC (BERT-style TLM with an Adversarially pretrained Convolutional neural network), a combination of a TLM and a CNN, where the CNN was pretrained using a novel GAN-style training scheme and masked sentences obtained automatically from Wikipedia.", "Using this CNN, we improved the performance of standard TLMs.", "We confirmed that BERTAC could achieve performance comparable to the SOTA and outperformed the base TLM used as a subcomponent of BERTAC on the GLUE tasks.", "We also showed that BERTAC outperformed the SOTA method of open-domain QA on Quasar-T and SearchQA." ]
[ "abstain", "abstain", "result", "result", "abstain", "objective", "other", "abstain", "abstain", "abstain", "objective", "other", "abstain", "method", "method", "result", "result", "other", "abstain", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "result", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "objective", "result", "result", "result" ]
[ "In this paper, we propose a mixture model-based end-to-end method to model the syntactic-semantic dependency correlation in Semantic Role Labeling (SRL).", "Semantic dependencies in SRL are modeled as a distribution over semantic dependency labels conditioned on a predicate and an argument word.", "The semantic label distribution varies depending on Shortest Syntactic Dependency Path (SSDP) hop patterns.", "We target the variation of semantic label distributions using a mixture model, separately estimating semantic label distributions for different hop patterns and probabilistically clustering hop patterns with similar semantic label distributions.", "Experiments show that the proposed method successfully learns a cluster assignment reflecting the variation of semantic label distributions.", "Modeling the variation improves performance in predicting short distance semantic dependencies, in addition to the improvement on long distance semantic dependencies that previous syntax-aware methods have achieved.", "The proposed method achieves a small but statistically significant improvement over baseline methods in English, German, and Spanish and obtains competitive performance with state-of-the-art methods in English.", "1 1 Introduction Semantic Role Labeling (SRL) answers an essential question about sentence semantics: [Who] [does what] [to whom].", "A core problem of SRL is identifying semantic dependencies that specify the semantic role of arguments in relation to predicates (He et al., 2018; Kasai et al., 2019).", "For example, [who] (argument) is the agent (semantic role) to [does what] (predicate).", "Semantic dependency parsers (Dozat and Manning, 2018a) identify semantic dependencies by giving a distribution over semantic dependency labels (denoted as semantic label distribution) for all predicate-argument pairs.", "1 Our code is available at this repository.", "In this paper, we propose a mixture model (Pear-son, 1894) based semantic dependency parser for SRL where we target the dependence of semantic label distributions on Shortest Syntactic Dependency Path (SSDP) patterns.", "SSDP is the shortest path connecting a predicate-argument pair in a syntactic dependency tree.", "Bunescu and Mooney (2005) and Cai et al. 
(2016) claim that SSDP encodes most information about bigram relations, such as the semantic dependency.", "Indeed, previous research (He et al., 2018; Xia et al., 2019) shows that modeling the correlation between SSDPs and semantic dependencies is crucial for building a high-performance SRL system.", "Semantic label distributions vary depending on SSDPs, even when the SSDPs connect predicate-argument pairs with the same surface words.", "Figure 1 shows an example where two predicate-argument pairs have different semantic dependency labels while sharing the same surface words.", "SSDP patterns help discriminate semantic labels between the two pairs.", "The example indicates the dependence of semantic label distributions on SSDP patterns.", "We propose a mixture model-based method (Figure 2) to model the dependence in two steps: (1) separately estimating semantic label distributions for different SSDP patterns as component distributions, and (2) probabilistically clustering SSDP patterns with similar semantic label distributions using a mixture weight.", "The mixture model estimates the semantic label distribution by aggregating the component distributions using the mixture weight.", "We focus on SSDP hop patterns in this paper, as we observed a drastic variation in semantic label distributions for different hop patterns through the change in mutual information (Shannon et al., 1949) (Section 2).", "We evaluate the proposed method using the CoNLL-2009 dataset (Hajic et al., 2009), the most popular multi-lingual SRL dataset with parallel syntactic and semantic dependency annotations.", "Experiments show that the proposed method correctly learns a mixture weight reflecting the variation in semantic label distributions.", "Modeling the variation improves performance in predicting short distance semantic dependencies in addition to the long distance dependencies that previous syntax-aware methods (He et al., 2018; Roth and Lapata, 2016; Strubell et al., 2018) improve only on.", "Previous syntax-aware methods improve their performance on long distance dependencies at the expense of the performance on short distance dependencies.", "In comparison, the proposed method makes no such compromise, improving its performance on semantic dependencies of all ranges.", "In general, the proposed method obtains a small but statistically significant improvement over baseline methods in English, German, and Spanish and achieves competitive performance with state-of-the-art methods in English.", "2 While being similar, the CoNLL-2009 dependency format (Surdeanu et al., 2008) predates the well-known universal dependency format (Nivre et al., 2017) and has fundamental differences.", "This work follows the CoNLL-2009 format for both syntactic and semantic dependencies.", "Our contributions are: (1) identifying the variation in semantic label distributions for different SSDP hop patterns, (2) proposing a mixture model-based method capturing the variation, and (3) conducting a detailed experiment evaluating the proposed method.", "As mentioned in Section 1, SSDP affects the choice of semantic dependency labels.", "We study the impact of SSDP hop patterns on semantic label distributions through the change in mutual information (Shannon et al., 1949) in this section.", "We observe a drastic change in mutual information only for hop patterns that frequently co-occur with semantic dependencies.", "SSDP is the path connecting a predicate-argument pair in a syntactic dependency tree.", "Its hop pattern describes the number of transitions needed to move from the
predicate to the argument.", "We denote the hop pattern by (m, n), where m is the number of dependent-to-head transitions and n is the number of head-to-dependent transitions.", "In a syntactic dependency tree, syntactic dependencies are arcs pointing from syntactic heads to syntactic dependents.", "The head-to-dependent transition moves in the same direction as the syntactic dependencies, whereas the dependent-to-head transition moves in the opposite direction.", "In Figure 1, the SSDP connecting eliminate and it consists of a dependent-to-head transition moving from eliminate to will, and a head-to-dependent transition moving from will to it.", "The hop pattern of this SSDP is (1, 1).", "We denote the syntactic random variable for hop patterns as X and the semantic random variable for semantic labels as Y.", "X maps predicate-argument word pairs (p_s, a_s) in a sentence s to their hop patterns, whereas Y maps the pairs to their semantic labels.", "Their mutual information MI(X, Y) measures the reduction in uncertainty about Y after knowing X.", "High mutual information indicates relatively low uncertainty in the conditional distribution P_{Y|X}.", "To highlight the impact of hop patterns on semantic label distributions, we compare the mutual information of two ideal models, a syntax-aware model (X_{(m,n)}, Y) and a syntax-agnostic model (X_0, Y).", "We define the syntactic variables X_{(m,n)} and X_0 as in Equations 1 and 2.", "This definition makes the variable X_{(m,n)} sensitive only to the hop pattern (m, n) and X_0 blind to any hop pattern information.", "We define the mutual information gain of (m, n) as the difference in mutual information between the syntax-aware model and the syntax-agnostic model (Equation 3).", "X_{(m,n)}(p_s, a_s) = 1 if (p_s, a_s) has hop pattern (m, n), and 0 otherwise (1); X_0(p_s, a_s) = 0 (2); ΔMI(X_{(m,n)}, X_0) = MI(X_{(m,n)}, Y) − MI(X_0, Y) (3). Figure 3 reports the mutual information gain of each hop pattern using the English training set of the CoNLL-2009 dataset.", "The figure shows that different hop patterns have drastically varying mutual information gains.", "A sharp spike of mutual information gain occurs at the hop pattern (0, 1), with a gain value of 0.149 bits, indicating a strong impact of the hop pattern (0, 1) on semantic label distributions.", "Hop patterns with relatively short transitions have non-zero gains ranging from 0.011 bits to 0.149 bits, which indicates that the degree of impact differs drastically.", "These hop patterns frequently co-occur with semantic dependencies (He et al., 2018).", "On the other hand, hop patterns that rarely co-occur with semantic dependencies have long transitions.", "These hop patterns have near-zero mutual information gains in Figure 3, which indicates the weak impact of the patterns.", "The varying degree of impact motivates the separate estimation of semantic label distributions for different hop patterns.", "The number of hop patterns with a weak impact motivates the clustering of hop patterns that share similar semantic label distributions.", "In this section, we present background information about syntactic and semantic dependency parsing", "and mixture models.", "We also present a brief survey of syntax-aware SRL methods using SSDP information and compare the proposed method with the previous methods.", "Both syntactic and semantic dependencies describe bigram relations between words, namely heads and dependents.", "The heads and the dependents correspond to syntactic heads and dependents
in syntactic dependencies and to predicates and arguments in semantic dependencies.", "The similarity suggests that a single mechanism, such as the biaffine parser (Dozat and Manning, 2017, 2018b), can capture the two dependencies.", "For semantic dependencies, the biaffine parser estimates a distribution P(r | p, a) over relations r ∈ R ∪ {∅} between a predicate p and an argument a.", "R denotes the set of semantic relation labels, and ∅ denotes that no relation occurs between p and a.", "For syntactic dependencies, the biaffine parser estimates a distribution P(h | d), predicting the syntactic head h of the syntactic dependent d.", "Neural biaffine parsers estimate the two distributions as in Equations 4, 5, 6, and 7.", "e_p, e_a, e_h, and e_d denote the feature vectors of p, a, h, and d from a sentence encoder.", "s_r(e_p, e_a) = e_p^T W_1^r e_a + w_2^r([e_p; e_a]) + b^r (4), P(r | p, a) = Softmax([s_r(e_p, e_a)]) (5), s(e_h, e_d) = e_h^T W_1 e_d + w_2([e_h; e_d]) + b (6), P(h | d) = Softmax([s(e_h, e_d)]) (7). 3.2 Mixture Model and Latent Variable Model Mixture models assume data to be generated from a mixture distribution whose component distributions belong to the same distributional family, such as the Gaussian distributions, but possess distinct parameters.", "The mixture of component distributions grants additional flexibility to the mixture model.", "For example, the Gaussian mixture model can capture multi-modal phenomena, as opposed to the simple Gaussian model (Bishop and Nasrabadi, 2007).", "A mixture model contains two core variables: an observable data variable X and a latent variable C indexing the component distribution that generates the data.", "(In Equation 4, W_1^r is a weight matrix, w_2^r is a weight vector, and b^r is a bias term for estimating the unnormalized probability score of P(r | p, a); similarly, W_1, w_2, and b in Equation 6 are the parameters for estimating the unnormalized score of P(h | d).)", "The mixture model computes the marginal likelihood P(x) := Σ_c P(x | c) P(c) by aggregating its component distributions P(x | c) using the mixture weight P(c).", "The optimal parameters (i.e., the mixture weight and the parameters of the component distributions) can be estimated by maximizing the log-likelihood log P(x).", "However, direct maximum likelihood estimation on the marginal log-likelihood is intractable for mixture models (Murphy, 2012), and the conventional Expectation-Maximization algorithm (Dempster et al., 1977) requires finding optimal parameters at each iteration.", "Variational Inference (Xu et al., 2015; Ba et al., 2015) maximizes a variational lowerbound of the log-likelihood (Equation 8), simultaneously optimizing the component distributions and the mixture weight.", "L = Σ_c q(c | x) log [P(x | c) P(c) / q(c | x)] (8) = log P(x) − KL(q(c | x) || P(c | x)) (9). 3.3 Syntactic Dependency Information in Semantic Dependency Parsing Inspired by the close connection between syntactic and semantic dependencies, He et al. (2018), Roth and Lapata (2016), and Shi et al.
(2020) attempt to build high-performance SRL systems using SSDP information.", "While these studies improve performance over syntax-agnostic methods, their methods either require language-specific hyperparameters or exhibit behavior that is challenging to interpret.", "The pruning method (He et al., 2018, 2019) is readily interpretable but requires language-specific hyperparameters.", "The method utilizes the statistical bias that most SSDPs rarely co-occur with semantic dependencies.", "It eliminates predicate-argument pairs with infrequent SSDPs using heuristics.", "Whether an SSDP can co-occur with semantic dependencies is hardcoded in the heuristics, making the method highly interpretable.", "However, the heuristics are language-specific, requiring manual tuning for every language.", "The neural methods (Roth and Lapata, 2016; Foland and Martin, 2015) are more language-independent but suffer from limited interpretability.", "The methods implicitly encode SSDP information using neural network encoders.", "Roth and Lapata (2016) and Foland and Martin (2015) encode SSDPs in a continuous embedding using a Long Short-Term Memory (LSTM) model or a Convolutional Neural Network model.", "Shi et al. (2020) jointly learn SSDP and semantic dependency information using a Transformer (Vaswani et al., 2017) by merging SSDP information with semantic dependency labels.", "These studies report performance improvements in one or more languages.", "However, interpreting the models' behavior is challenging.", "Neural encoders, such as the LSTM model in Roth and Lapata (2016), project SSDPs into a high-dimensional space.", "The high-dimensional space has a complex structure, rendering clustering analyses based on Euclidean distances less effective.", "Roth and Lapata (2016) interpret the behavior of their model using clustering analysis, suggesting that their model captures many linguistic phenomena.", "However, the linguistic phenomena are fragmentary and limited to a few syntactic constructions.", "In contrast, the proposed method is generic like the neural methods and interpretable like the pruning method.", "The proposed method optimizes its parameters using gradients of the back-propagated errors, which makes it more language-independent.", "As a result, the proposed method learns a mixture weight reflecting the impact of SSDP hop patterns on semantic label distributions, enabling analyses using the mixture weight.", "In this section, we present the proposed mixture model-based semantic dependency parser, which models the dependence of semantic label distributions on SSDP hop patterns.", "In Section 2, we discussed the need to separately estimate semantic label distributions for different hop patterns and the need to cluster hop patterns sharing similar semantic label distributions.", "The proposed parser estimates semantic label distributions for different hop patterns using the component distributions and clusters hop patterns using the mixture weight of a mixture model.", "Figure 2 illustrates the model architecture of the proposed method.", "The model contains a conventional biaffine parser for syntactic dependencies and a mixture model-based biaffine parser for semantic dependencies.", "The syntactic parser provides a syntactic dependency tree from which the clustering component extracts hop patterns and determines the mixture weights.", "The biaffine parsers in the semantic parser estimate the component distributions.", "The semantic parser computes the semantic label distribution by
aggregating the component distributions using the mixture weight.", "The syntactic and the semantic parser share a backbone sentence encoder, a Transformer model in our implementation.", "We jointly optimize the parameters of the syntactic and the semantic parser by optimizing the log-likelihood of the syntactic dependencies and a variational lowerbound of the log-likelihood (ELBo) of the semantic dependencies.", "We use the lowerbound as an approximation to the log-likelihood for inference because we find it works best in predicting semantic dependencies.", "We expand on the training objective of the semantic parser.", "The objective is to maximize the likelihood P(r | p, a) of the observed semantic label r conditioned on the predicate p and the argument a.", "We rewrite the likelihood as a marginal of the joint likelihood P(r, c | p, a), where c is the index of the component distributions.", "The joint likelihood can be decomposed as in Equation 12, where the former term corresponds to the component distributions and the latter term corresponds to the mixture weight.", "Since we are interested in separating semantic label distributions by hop patterns, we replace the term P(c | p, a) with P(c | ssdp(p, a)), where ssdp(p, a) maps predicate-argument pairs to their hop patterns.", "P(c | ssdp(p, a)) also serves as the variational approximation q(c | r, p, a) because we assume that the hop pattern, together with the predicate-argument pair, determines the semantic dependency label.", "This assumption removes the need to condition the component index c on the semantic label r in the variational approximation q.", "In our implementation, we encode hop patterns with orthogonally initialized embeddings and estimate the mixture weight of a hop pattern by applying a multi-layer perceptron followed by a softmax layer to the embedding.", "log P(r | p, a) (10) = log Σ_c P(r, c | p, a) (11) = log Σ_c P(r | c, p, a) P(c | p, a) (12) = log Σ_c P(r | c, p, a) P(c | ssdp(p, a)) (13) ≥ Σ_c P(c | ssdp(p, a)) log P(r | c, p, a) (14) = L_sem(θ | X_sem = (r, p, a)) (15)", "Equation 16 depicts the full objective of the proposed model.", "It consists of a log-likelihood objective of the syntactic parser (Equation 17) and the ELBo objective of the semantic parser (Equation 15).", "G_syn stands for the set of all syntactic dependencies (h, d), whereas G_sem stands for the set of all semantic dependencies (r, p, a).", "J(θ) = Σ_{(h,d) ∈ G_syn} L_syn(θ | X_syn = (h, d)) + Σ_{(r,p,a) ∈ G_sem} L_sem(θ | X_sem = (r, p, a)) (16), L_syn(θ | X_syn = (h, d)) = log P(h | d) (17). 5 Experiment In this section, we present experimental results for the proposed method.", "We refer to the proposed method as MM (mixture model) in this section.", "We use the labeled attachment score (LAS) (Hajic et al., 2009) as the primary metric.", "LAS is a micro-F1 score measuring how well a model recovers semantic dependencies.", "We conduct our experiments comparing MM with five baseline methods (Table 1) using the CoNLL-2009 dataset.", "We perform the comparison on all languages using the corresponding development sets.", "Each model is run with four randomly generated seeds to mitigate the impact of the seeds.", "We also compare the semantic scores (Hajic et al., 2009) of MM with state-of-the-art syntax-aware methods using the English test set.", "The semantic score is a micro-F1 score evaluating a model's performance on predicate identification in
addition to the semantic dependency recovery.", "We use preidentified predicates extracted from the mate-tools (Björkelund et al., 2010), following the evaluation method of Roth and Lapata (2016).", "We evaluate MM using three word embeddings: a non-context-sensitive embedding, FastText (Joulin et al., 2016), and two context-sensitive embeddings, ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019).", "When comparing with state-of-the-art methods, we report results on the GloVe (Pennington et al., 2014) and FastText embeddings.", "However, the result on the FastText embedding is for reference only because the state-of-the-art methods report results mainly on the GloVe embedding.", "We use an 8-layer Transformer as the backbone encoder for MM and the baseline models.", "We set the batch size to 5000 words, the maximum size that a P100 device can accommodate.", "We use the Adam optimizer (Kingma and Ba, 2015) with parameters lr = 4e-6, β_1 = 0.9, and β_2 = 0.98 for training.", "Table 1 describes the baseline methods: Transformer (syntax-agnostic), a Transformer model (Vaswani et al., 2017) using a biaffine semantic dependency parser; Multitask (syntax-aware), a Transformer model using two biaffine parsers for syntactic and semantic dependencies; LISA (syntax-aware), the Linguistically-Informed Self-Attention model (Strubell et al., 2018); PathLSTM (syntax-aware), a Multitask model using dependency path embeddings (Roth and Lapata, 2016); Pruning (syntax-aware), a Multitask model using the pruning technique (He et al., 2018); and MM (syntax-aware), a Multitask model using the proposed mixture model-based semantic dependency parser.", "We set the number of component distributions k in MM to 5 for all languages.", "We find that this number works for most languages in a preliminary experiment exploring k = 1, 3, 5, 7, 10.", "For k > 5, some components will not be assigned to any hop pattern, resulting in a waste of model parameters.", "For k < 5, some components are forced to estimate semantic label distributions for hop patterns of a different nature, resulting in a loss of performance.", "We do not perform back-propagation between the syntactic and the semantic parser in MM because we found that it negatively impacts the two parsers.", "We find that MM significantly improves over the baseline methods on the English development set.", "Figure 4 reports the LAS of MM and the baseline methods using box plots.", "MM achieves better LAS than the baseline methods with all three embeddings.", "We conduct a series of significance tests against the null hypothesis that MM performs equally to each baseline method.", "The p-values of the hypothesis tests are shown in Table 2.", "Each cell in the table shows the p-value of a test comparing MM with a baseline method (shown in the row) on an embedding (shown in the column).", "The table suggests that MM significantly outperforms all baseline methods on the three embeddings, except the PathLSTM method on the FastText embedding.", "The significance test confirms the effectiveness of MM in modeling semantic dependencies.", "We find that MM learns a mixture weight reflecting the impact of hop patterns on semantic label distributions.", "Table 3 reports the component assignment extracted from the learned mixture weight.", "We extract the component assignment for hop patterns up to (5, 3).", "Most evidently, MM consistently assigns the hop pattern (0, 1) to a unique component in all three embeddings.", "This behavior agrees with our findings in Section 2 that the hop pattern has the highest
mutual information gain.", "MM also consistently assigns hop patterns with near-zero mutual information gains to a single component.", "Moreover, MM clusters hop patterns with similar non-zero gains into a single component.", "These results suggest that semantic label distributions of different hop patterns have unique properties.", "MM is readily applicable to other languages beyond English using the same hyperparameter setting.", "Figure 5 reports the comparison of MM with the Transformer and the Multitask methods on the development sets of German, Spanish, Catalan, Chinese, and Czech.", "The Multitask method consistently outperforms the Transformer method in all languages.", "In comparison, MM significantly outperforms the Multitask method in German and Spanish.", "MM also shows a modest improvement over the Multitask method in Catalan.", "In Chinese, MM performs similarly to the Multitask method but better than the Transformer method.", "In Czech, MM fails to learn and achieves a considerably low LAS.", "We might need to tune the architecture or hyperparameters for Czech, whereas MM stably outperforms the baseline methods in the other languages.", "Using the Transformer method as a baseline, we find that MM improves on both short and long distance semantic dependencies, whereas syntax-aware baseline methods improve only on long distance dependencies.", "Table 4 reports the median semantic scores (P/R/F1 on the WSJ and Brown test sections) of MM and the reported scores of state-of-the-art methods on the English test set. GloVe: Zhou et al. (2020) WSJ 88.73/89.83/89.28, Brown 82.46/83.2/82.82; Li et al. (2019) 87.8/88/87.9, 77/76.8/76.9; He et al. (2018) 89.7/89.3/89.5, 81.9/76.9/79.3; Roth and Lapata (2016) 88.1/85.3/86.7, 76.9/73.8/75.3; Kasai et al. (2019) 89/88.2/88.6, 78/77.2/77.6; MM 91.03/90.13/90.58, 80.59/79.21/79.83; MM (FastText) 91.16/90.19/90.71, 83.93/82.64/83.28. ELMo: Li et al. (2019) 90.5/92.1/91.3, 81.7/81.9/81.8; Kasai et al. (2019) 90.3/90/90.2, 81/80.5/80.8; Cai and Lapata (2019) 91.7/90.8/91.2, 83.2/81.9/82.5; Lyu et al. (2019) -/-/91, -/-/82.2; Chen et al. (2019) -/-/91.1, -/-/82.7; MM 92.21/91.45/91.82, 86.51/85.30/85.90. BERT: Shi and Lin (2019) (base) 92.1/91.9/92, 85.6/84.7/85.1; Shi and Lin (2019) (large) 92.4/92.3/92.4, 85.7/85.8/85.7; Zhou et al. (2020) 91.21/91.19/91.2, 85.65/86.09/85.87; MM 92.33/91.77/92.05, 87.00/85.98/86.32.", "To illustrate the finding, we group the semantic dependencies by their linear length (l = |idx_p − idx_a|, where idx_p and idx_a are the indices of the predicate p and the argument a in the sentence) and evaluate the methods' performance on each group.", "We group the semantic dependencies into four bins: the short-distance bin (0-2) and the long-distance bins (3-5, 6-8, 9-inf).", "We then compute the relative performance score (s_r = s_syn+ − s_syn, where s_syn+ is the score of the syntax-aware model and s_syn is the score of the syntax-agnostic Transformer model) of each syntax-aware method using the model with the median LAS score.", "Figure 6 reports the relative scores of LAS, Precision, and Recall.", "MM has the best relative LAS among the syntax-aware methods in predicting short distance dependencies.", "On the FastText and the ELMo embeddings, MM is the only method scoring a positive relative LAS (i.e., MM is the only method improving over the Transformer method).", "The reason is that MM achieves significantly better precision than the baseline syntax-aware methods, which allows MM to overcome its lower recall.", "Meanwhile, MM achieves a performance improvement similar to that of the baseline syntax-aware methods in predicting long distance dependencies.", "MM achieves competitive performance with state-of-the-art syntax-aware methods.", "The test set contains two sections: the WSJ (in-domain) section and the Brown (out-of-domain) section.", "MM achieves the best performance on the WSJ section on the GloVe and ELMo embeddings and performs comparably to the other methods on the BERT embedding.", "MM also scores the best performance on the Brown section on the ELMo and BERT embeddings.", "We also find that MM on the FastText embedding performs better than MM on the GloVe embedding.", "This result is in line with a study evaluating non-context-sensitive word embeddings (Wang et al., 2019), in which the FastText embedding outperforms the GloVe embedding on downstream NLP tasks.", "This paper presented a mixture model-based method for syntax-aware semantic dependency parsing in SRL.", "The method models the dependence of semantic label distributions on SSDP patterns.", "We focused on SSDP hop patterns because we observed a drastic variation in semantic label distributions through the change in mutual information.", "The proposed method successfully learned a mixture weight reflecting the variation.", "The method improved performance in predicting both short and long distance semantic dependencies, whereas baseline syntax-aware methods improved only on long distance dependencies.", "The method outperformed baseline methods by a small but statistically significant margin in many languages.", "Moreover, the proposed method achieved performance competitive with state-of-the-art methods in English.", "Nonetheless, hop patterns contain only limited information about SSDPs.", "In the future, we plan to apply the proposed method to more informative SSDP patterns, such as labeled SSDP patterns.", "This research was supported by JST, CREST Grant Number
JPMJCR2114, Japan." ]
[ "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "other" ]
[ "We report on methods to create the largest publicly available parallel corpora by crawling the web, using open source software.", "We empirically compare alternative methods and publish benchmark data sets for sentence alignment and sentence pair filtering.", "We also describe the parallel corpora released and evaluate their quality and their usefulness to create machine translation systems.", "Parallel corpora are essential for building high-quality machine translation systems and have found uses in many other natural language applications, such as learning paraphrases (Ban-nard and Callison-Burch, 2005; Hu et al., 2019) or cross-lingual projection of language tools (Yarowsky et al., 2001).", "We report on work to create the largest publicly available parallel corpora by crawling hundreds of thousands of web sites, using open source tools.", "The processing pipeline consists of the steps: crawling, text extraction, document alignment, sentence alignment, and sentence pair filtering.", "We describe these steps in detail in Sections 48.", "For some of these steps we evaluate several methods empirically in terms of their impact on machine translation quality.", "We provide the data resources used in these evaluations as benchmarks for future research.", "As part of these effort, several open source components have been developed.", "These are integrated into the open-source tool Bitextor, 1 a highly modular pipeline that allows harvesting parallel corpora from multilingual websites or from preexisting or historical web crawls such as the one available as part of the Internet Archive.", "2 1 https://github.com/bitextor/bitextor 2 https://archive.org/ The execution of the pipeline has focused on of-ficial European Union languages, but also targeted Russian, Sinhala, Nepali, Tagalog, Swahili, and Somali.", "We show that the obtained parallel corpora improve state-of-the-art results on common benchmarks, such as the WMT Shared Task on News Translation.", "While the idea of mining the web for parallel data has been already pursued in the 20th cen-tury (Resnik, 1999), the most serious efforts have been limited to large companies such as Google (Uszkoreit et al., 2010) and Microsoft (Rarrick et al., 2011), or targeted efforts on specific domains such as the Canadian Hansards and Europarl (Koehn, 2005).", "The book Bitext Alignment (Tiedemann, 2011) describes some of the challenges in greater detail.", "Most publicly available parallel corpora are the result of targeted efforts to extract the translations from a specific source.", "The FrenchEnglish Canadian Hansards 3 were used in the earliest work on statistical machine translation.", "A similar popular corpus is Europarl (Koehn, 2005), used throughout the WMT evaluation campaign.", "Multi-lingual web sites are attractive targets.", "Rafalovitch and Dale (2009); Ziemski et al. (2015) extract data from the United Nations, Tager (2011) from European Patents, Lison and Tiedemann (2016) from a collection of TV and movie subtitles.", "Cettolo et al. (2012) explain the creation of a multilingual parallel corpus of subtitles from the TED Talks website which is popular due to its use in the IWSLT evaluation campaign.", "There are also various efforts targeted at a single language pair.", "Martin et al. (2003) build a parallel corpus for InuktitutEnglish.", "Utiyama and Isahara (2003); Fukushima et al. 
(2006) worked on creating Japanese-English corpora.", "Uchiyama and Isahara (2007) report on the efforts to build a Japanese-English patent corpus and Macken et al. (2007) on efforts on a broad-based Dutch-English corpus.", "Li and Liu (2008) mine the web for a Chinese-English corpus.", "A large Czech-English corpus from various sources was collected (Bojar et al., 2010), linguistically annotated (Bojar et al., 2012), and has been continuously extended to over 300 million words (Bojar et al., 2016).", "All these efforts rely on methods and implementations that are quite specific for each use case, not documented in great detail, and not publicly available.", "A discussion of the pitfalls during the construction of parallel corpora is given by Kaalep and Veskis (2007).", "A large collection of corpora is maintained at the OPUS web site (http://opus.lingfil.uu.se/) (Tiedemann, 2012).", "Document alignment can be defined as a matching task that takes a pair of documents and computes a score that reflects the likelihood that they are translations of each other.", "The task is typically limited to a single web domain (all web pages from www.aaa.com and aaa.com, possibly aaa.de, but not bbb.com) for efficiency.", "Matching may take the HTML structure into account, or purely rely on the textual content.", "Examples of structural matching are the use of edit distance between linearized documents (Resnik and Smith, 2003) and the probability assigned by a probabilistic DOM-tree alignment model (Shi et al., 2006).", "Using the URL for matching is a very powerful indicator for some domains, typically by using a predefined set of patterns for language marking or simple Levenshtein distance (Le et al., 2016).", "Content matching requires crossing the language barrier at some point, typically by using bilingual dictionaries or translating one of the documents into the other document's language (Uszkoreit et al., 2010).", "Documents may be represented by vectors over word frequencies, typically tf-idf-weighted.", "Vectors may also be constructed over bigrams (Dara and Lin, 2016) or even higher order n-grams (Uszkoreit et al., 2010).", "The vectors are then typically matched with cosine similarity (Buck and Koehn, 2016a).", "The raw vectors may be re-centered around the mean vector for a web domain (Germann, 2016).", "Document alignment quality can be improved with additional features such as the ratio of shared links, similarity of link URLs, ratio of shared images, a binary feature indicating if the documents are linked, DOM structure similarity (Esplà-Gomis et al., 2016), same numbers (Papavassiliou et al., 2016), or same named entities (Lohar et al., 2016).", "Guo et al.
(2019) introduce the use of document embeddings, constructed from sentence embeddings, to the document alignment task.", "Early sentence aligners (Brown et al., 1991; Gale and Church, 1993) use scoring functions based only on the number of words or characters in each sentence and alignment algorithms based on dynamic programming.", "Europarl, for example, used metadata to align paragraphs, typically consisting of 2-5 sentences, and used Gale and Church (1993)'s method to align sentences within corresponding paragraphs.", "Later work added lexical features and heuristics to speed up search, such as limiting the search space to be near the diagonal (Moore, 2002; Varga et al., 2005).", "More recent work introduced scoring methods that use MT to get both documents into the same language (Sennrich and Volk, 2010) or use pruned phrase tables from a statistical MT system (Gomes and Lopes, 2016).", "Both methods anchor high-probability alignments in the search space and then fill in and refine alignments.", "They later propose an extension (Sennrich and Volk, 2011) in which an SMT system is bootstrapped from an initial alignment and then used in Bleualign.", "Vecalign (Thompson and Koehn, 2019) is a sentence alignment method that relies on bilingual sentence embeddings and achieves linear run time with a coarse-to-fine dynamic programming algorithm.", "Parallel corpora that have been crawled from unverified web sites and processed by error-prone extraction and alignment methods are likely to contain noise, such as random text fragments, text in the wrong language, translations produced by machine translation tools or bad translators, and misaligned sentence pairs.", "Such noise is especially harmful for neural machine translation (Khayrallah and Koehn, 2018), so filtering it out is an essential processing step.", "There is a robust body of work on filtering out noise in parallel data, but most recently this topic has gained a lot of momentum, partly due to the lack of robustness of neural models and fostered by recent shared tasks on parallel corpus filtering under high-resource (Koehn et al., 2018) and low-resource data conditions (Koehn et al., 2019).", "Most participants in these shared tasks used three components: pre-filtering rules, scoring functions for sentence pairs, and a classifier that learned weights for feature functions.", "Pre-filtering rules.", "Some of the training data can be discarded based on simple deterministic filtering rules.", "This may remove over 80% of the data (Kurfalı and Östling, 2019; Soares and Costa-jussà, 2019).", "Such rules remove too short or too long sentences; sentences that have too few words (tokens with letters instead of just special characters), either absolute or relative to the total number of tokens; sentences whose average token length is too short or too long; sentence pairs with mismatched lengths in terms of number of tokens; sentence pairs where names, numbers, dates, email addresses, or URLs do not match between both sides; sentence pairs that are too similar, indicating simple copying instead of translating; and sentences where language identifiers do not detect the required language.", "Scoring functions.", "Sentence pairs that pass the pre-filtering stage are assessed with scoring functions which provide scores that hopefully correlate with the quality of sentence pairs.", "Participants used a variety of such scoring functions, including n-gram or neural language models on clean data (Rossenbach et al., 2018), language models trained on the
provided raw data as contrast, neural translation models (Junczys-Dowmunt, 2018), bag-of-words lexical translation probabilities (González-Rubio, 2019), or even existing off-the-shelf tools like Zipporah and Bicleaner (Chaudhary et al., 2019).", "Learning weights for scoring functions.", "Given a large number of scoring functions, simply averaging their resulting scores may be inadequate.", "Learning weights to optimize machine translation system quality is computationally intractable due to the high cost of training these systems to evaluate different weight settings.", "A few participants instead used a classifier that learns how to distinguish between good and bad sentence pairs (where bad sentence pairs are either synthesized by scrambling good sentence pairs or selected from the raw crawled data).", "A novel method that was central to the best-performing submission in WMT 2019 was the use of cross-lingual sentence embeddings that were directly trained from parallel sentence pairs (Chaudhary et al., 2019).", "Other submissions used monolingual word embeddings (Soares and Costa-jussà, 2019; Kurfalı and Östling, 2019; Bernier-Colborne and Lo, 2019).", "Another approach is to first train a translation system on the clean data, then use it to translate the non-English side into English, and use monolingual matching methods to compare it against the English side of the parallel corpus.", "Different matching metrics were used: METEOR (Erdmann and Gwinnup, 2019), Levenshtein distance (Sen et al., 2019), or BLEU (Parcheta et al., 2019).", "As Rarrick et al. (2011) point out, one type of noise in parallel corpora extracted from the web is translations that have been created by machine translation.", "Venugopal et al. (2011) propose a method to watermark the output of machine translation systems to aid this distinction, with a negligible loss of quality.", "Antonova and Misyurev (2011) report that rule-based machine translation output can be detected due to certain word choices, and statistical machine translation output can be detected due to lack of reordering.", "Rarrick et al.
(2011) train a classifier to learn the distinction and show that removing such data leads to better translation quality.", "Our work exploits web sites that provide roughly the same content in multiple languages, leading us to assume that we can find pairs of web pages which are translations of each other, with translated sentences following the same order.", "This assumption does not hold in less consistently translated web content such as Wikipedia, or accidental parallel sentences found in news stories about the same subject matter written in multiple languages.", "There have been increasing efforts to mine sentence pairs from large pools of multi-lingual text, which are treated as unstructured bags of sentences.", "Munteanu and Marcu (2005) use document retrieval and a maximum entropy classifier to identify parallel sentence pairs in a multi-lingual collection of news stories.", "Bilingual sentence embeddings (Guo et al., 2018) and multilingual sentence embeddings (Artetxe and Schwenk, 2018) were tested on their ability to reconstruct parallel corpora.", "This led to work to construct WikiMatrix, a large corpus of parallel sentences from Wikipedia (Schwenk et al., 2019), based on the cosine distance of their cross-lingual sentence embeddings.", "Since the start of the collection effort in 2015, we identified potential web sites to crawl in various ways, but mainly by exploiting statistics from CommonCrawl.", "By splitting this large collection of crawled web pages by web domain and running text extraction and language identification (Buck et al., 2014), we can extract statistics on what language content exists on each of them.", "Web domains with sufficient content in a targeted language and English are selected for crawling.", "The thresholds of what constitutes sufficient content varied depending on language.", "Typically, we require minimum amounts of content in the targeted language and English (measured in bytes of text), and consider the ratio between the two.", "For instance, we identified 19,616 web domains with at least 100KB of content in German and English (max ratio 10), but only 438 web domains with at least 20KB of content in Maltese and English (max ratio 10).", "It is worth noting that by targeted crawling of web sites we are able to collect many more web pages than present in CommonCrawl.", "In an exploratory study, only 5% of a collection of web pages with useful content were found in CommonCrawl.", "This may have improved with recent more extensive crawls by CommonCrawl, but there is still a strong argument for targeted crawling.", "Crawling is the initial step of the pipeline.", "It entails downloading documents from a number of websites and looking for any documents that contain text.", "These documents are stored as single or multi-domain Web ARChive (WARC) files.", "Figure 1: Workflow diagram of Bitextor.", "WARC is an archiving format for crawled data originally proposed by the Internet Archive and developed by a consortium of libraries and archives into the ISO 28500:2009 standard (ISO, 2009).", "It consists of a list of gzip-compressed records, each comprising a header with metadata and a crawled document.", "HTTrack (https://www.httrack.com/): a well-known multi-platform tool for crawling.", "It has been part of Bitextor for a long time, even though it is now deprecated as support for the tool is discontinued.", "Heritrix (https://github.com/internetarchive/heritrix3): the Internet Archive's web crawler; it is fully compatible with the WARC format and supports a variety of
options that make it one of the most suitable choices for large-scale data crawling.", "Creepy: a Python library with basic resources for crawling.", "A crawler has been implemented on top of it, and is currently experimental.", "Wget: one of the most popular tools for retrieving files through HTTP and HTTPS in Unix systems.", "It is fully compatible with the WARC format.", "Most of our crawling in ParaCrawl has been done using HTTrack.", "To deal with the I/O-intensive process of writing small files with high frequency, data is first stored on local SSD drives and then transferred to a network file system for subsequent processing.", "After crawling, all documents are pre-processed to extract and normalize the text and identify their language.", "The resulting cleaned and sorted text is the input for the subsequent steps of document and segment alignment (see Sections 6 and 7).", "Conversion to HTML. WARC files contain one web-crawled document per record.", "The documents can be in a variety of formats that contain text: plain text, HTML, Open Document Format (.odt), Office Open XML (.docx), or PDF files containing text.", "With the exception of the small number of documents that are already in plain text format, the bitextor-warc2htmlwarc.py module converts any of these formats to HTML (see Fig. 1) and produces WARC files containing only HTML or plain text documents.", "Text extraction from HTML. Given WARC files containing HTML, we extract the text content.", "We preserve sentence breaks indicated by HTML tags such as <p> or <br> (paragraph and line break), but remove formatting tags such as <b> (for bold text) without a trace.", "Language identification with cld2 and text extraction are currently performed by the Python module bitextor-warc2preprocess.py; as text extraction is a rather intensive operation, an alternative workflow uses an experimental module written in the Go language, giawarc.", "Using bilingual lexica. The traditional workflow in Bitextor until version 5 used bilingual lexica.", "The module bitextor-buildidx.py builds indexes listing, for each word in the lexicon of each language, the documents containing it.", "Then bitextor-idx2ridx uses the bilingual lexica to translate these words and build reverse indexes where each document is paired with a list of documents and bag-of-words-based overlap scores in the other language.", "A series of modules (bitextor-urlscomparison.py, bitextor-urlsetoverlap.py, bitextor-imagestooverlap.py, etc.) compute a series of features for each language direction based on mutual linking and the comparison of document URLs, the set of outgoing URLs, HTML structure, and image content; these features are integrated by bitextor-rank.py into two new reverse-index files with new scores, which are used to obtain the final document alignment.", "Using machine translation. This workflow uses machine translation to decide whether two documents should be aligned, and is the one that has been used for the parallel data releases of the project (Buck and Koehn, 2016b).", "After extract-lett.py extracts plain-text documents in each language, a machine translation system translates each document from language A to B.", "We then generate a (sparse) matrix of tf-idf scores between machine translated versions of documents in language A and documents in language B.", "These scores are used by compute_matches.py to compute a list of document pairs (score, source URL, target URL).", "Document pairs are stored in a file in which each line contains
the URLs of both documents and their plain-text content encoded in base64.", "During the ParaCrawl project, we made use of a few sentence alignment tools.", "In this paper, we compare their performance on five language pairs.", "The sentence aligners are: Hunalign (Varga et al., 2005) is a widely used tool that relies on a bilingual dictionary that we generated from the Europarl corpus or other available parallel corpora.", "Table 1: Corpus statistics for data used in the sentence alignment evaluation (web domains / document pairs / English tokens) — German: 21,806 / 17,109,018 / 10,788,923,009; Czech: 12,179 / 6,661,650 / 4,089,806,440; Hungarian: 5,560 / 2,770,432 / 1,504,698,348; Estonian: 5,129 / 2,301,309 / 1,427,328,440; Maltese: 933 / 303,198 / 134,232,546.", "Bleualign (Sennrich and Volk, 2010) aligns an English translation of the foreign sentences and the English sentences based on their similarity, as measured by a variant of the BLEU score.", "We implemented a faster version of Bleualign in C++.", "Vecalign (Thompson and Koehn, 2019) is a new sentence aligner based on sentence embeddings, using an efficient coarse-to-fine algorithm with linear run time.", "We used pre-trained LASER embeddings (https://engineering.fb.com/ai-research/laser-multilingual-sentence-embeddings/), which cover all the languages of ParaCrawl except for Irish.", "We compared the quality of the sentence pairs extracted from document pairs for these tools.", "To our knowledge, this is the first evaluation of sentence aligners on large-scale real-world web-crawled data.", "We selected five languages, ranging from low-resource (Maltese) over mid-resource (Estonian, Hungarian) to high-resource (Czech, German).", "We selected a subset of web domains; for details see Table 1.", "The data is provided as document pairs from the usual upstream ParaCrawl processing.", "The text of web pages needs to be further split into sentences, and then aligned using the different sentence aligners.", "The resulting sentence pairs are deduplicated and assessed for quality using Bicleaner (more on sentence pair filtering in the next section).", "Since different sentence aligners generate different amounts of data (for instance, Bleualign filters quite aggressively for noise), we selected differently sized subsets of the data for evaluation by selecting the best sentence pairs according to Bicleaner quality scores.", "We built neural machine translation models on these subsets using Fairseq and evaluated them on test sets drawn from the WMT news translation task (newstest2018 for German, Czech, Estonian; newstest2009 for Hungarian) and the EU Bookshop corpus (http://opus.nlpl.eu/EUbookshop.php) (Maltese).", "Table 2: BLEU scores for systems trained on corpora generated by different sentence aligners (Hunalign / Vecalign / Bleualign) — German: 35.1 (100m) / 35.8 (150m) / 35.0 (100m); Czech: 21.0 (50m) / 21.2 (50m) / 21.0 (50m); Hungarian: 16.5 (30m) / 16.8 (30m) / 16.6 (15m); Estonian: 21.8 (20m) / 21.6 (20m) / 21.4 (20m); Maltese: 33.5 (5m) / 34.1 (7m) / 30.3 (2m).", "See Table 2 for the BLEU scores and corpus sizes for the best-performing subsets for each sentence aligner and language.", "Vecalign gives the best results for four of the languages, and is slightly behind Hunalign for Estonian.", "Our processing pipeline is aimed at high recall at the cost of precision, thus creating large but very noisy corpora.", "So, as a last processing step, we aim to filter out sentence pairs that are not useful as training data for machine translation or any other purpose.", "This is especially important since training on noisy
corpora is a challenge for neural machine translation, which motivated the organization of two shared tasks in 2018 and 2019, on the high-resource language pair German-English and the low-resource languages Sinhala and Nepali, respectively.", "Here, we extend this evaluation to European languages with medium-sized resources.", "Building on the data sets generated by the sentence alignment evaluation of the previous section, we compared three sentence pair filtering methods used in the ParaCrawl effort: Zipporah (Xu and Koehn, 2017), Bicleaner (Sánchez-Cartagena et al., 2018), and LASER (Chaudhary et al., 2019).", "We carried out the evaluation (see Table 3) in the same fashion as in the previous section; the benchmark data is available at http://www.statmt.org/paracrawl-benchmarks/.", "Filtering by LASER scores gives the best results except for Maltese (for which the publicly available LASER model has not been trained).", "Table 3: BLEU scores for systems trained on subsets of the data selected by different sentence pair filtering methods (Zipporah / Bicleaner / LASER) — de, Hunalign: 34.4 (100m) / 35.1 (100m) / 36.0 (100m); de, Vecalign: 34.6 (100m) / 35.8 (100m) / 36.3 (50m); cs, Hunalign: 19.1 (15m) / 21.0 (50m) / 22.2 (30m); cs, Vecalign: 21.4 (30m) / 21.2 (50m) / 22.2 (30m); hu, Hunalign: 16.2 (10m) / 16.5 (30m) / 17.2 (10m); hu, Vecalign: 16.9 (15m) / 16.8 (30m) / 17.2 (15m); et, Hunalign: 21.2 (15m) / 21.8 (20m) / 22.1 (15m); et, Vecalign: 21.3 (20m) / 21.6 (20m) / 22.9 (20m); mt, Hunalign: 32.8 (5m) / 33.5 (7m) / 32.6 (7m); mt, Vecalign: 33.8 (5m) / 34.1 (5m) / 30.2 (7m).", "Moreover, in almost all settings, we achieve better results with Bicleaner than Zipporah.", "Overall, the ParaCrawl corpus release v5.0 contains a total of 223 million filtered, unique sentence pairs from around 150k website domains and across 23 EU languages with English (see Table 5).", "Sentence pairs with a Bicleaner score of less than 0.7 were discarded, but remain in the RAW release.", "However, the data release is highly imbalanced, with 73% of sentence pairs coming from just five languages: French, German, Spanish, Italian, and Portuguese.", "The average (untokenised) English sentence length (over all languages) is 22.9 words, with some notable anomalies.", "For example, the low-resourced Irish-English pair (27.6 words) has over 50% of sentence pairs originating from the legal domain, where sentences are longer than usual.", "Furthermore, we noticed that filtered sentences which had been aligned using Hunalign were significantly shorter than those aligned by Bleualign (26.1 and 20.1 words respectively), although we are unsure of the exact reason for this discrepancy.", "Our main motivation for creating the ParaCrawl corpus is to improve the quality of machine translation systems.", "To test this, we trained neural machine translation models where we added the corpus to existing data sets for language pairs that were tackled in the shared task on news translation at the Conference on Machine Translation (WMT), which we consider a strong baseline.", "We trained Transformer-Base models with Marian using SentencePiece.", "See Table 4 for results.", "For most language pairs, we see gains of several BLEU points (up to 6 BLEU points for English-Romanian).", "We even see gains for English-Czech, where ParaCrawl is quite a bit smaller than existing data sets (+0.7 BLEU when adding 5.3m sentence pairs to the existing set of 52m sentence pairs).", "Several of the steps involved in producing and evaluating the ParaCrawl corpora are computationally expensive.", "Even as
some of the steps are embarrassingly parallel and amenable to processing in a high-performance computing setting, even pre-processing of 100TB of source data to produce candidate documents consumes on the order of 50,000 CPU-hours, equivalent to an estimated 720kWh of power.", "The datasheet of an Intel E5-2695 processor says that it uses 115W of power, or about 9.5W/core.", "This estimate includes a 50% margin for main board power and other overhead.", "Training of a neural network model for translating one of the more resource-rich languages such as German may take a week on a dozen GPUs, again consuming about 750kWh.", "Translating 500 million German sentences to English for evaluation consumed roughly 7MWh.", "In practice, these computations are not simply performed once; they are performed many times as parameters are changed and different strategies tried.", "This energy cost is significant.", "The Typical Domestic Consumption Values published by Ofgem (https://www.ofgem.gov.uk/electricity/retail-market/monitoring-data-and-statistics/typical-domestic-consumption-values), the UK energy regulator, say that a high-consuming household with electric heating is expected to consume 7.1MWh/year.", "sentence pair filtering step.", "Clean has been proven to be useful for training machine translation systems.", "We release the raw corpus to allow use of other filtering methods, or different thresholds for quality cutoffs.", "Does an increase of one or two BLEU points justify this cost?", "For ParaCrawl, we argue that yes, it does, because we are producing an enabling data set whose cost will, we hope, be amortised across many future experiments.", "But there is a more general point to be made here: it is not currently the practice in the machine translation community to publish figures about the cost involved in achieving an increase in performance as measured with the standard metrics.", "It is not straightforward to evaluate when or if we, as a community, have reached a point of diminishing returns where small changes to a family of methods consume an ever-increasing amount of resources yielding only marginal improvements.", "We therefore suggest adopting a practice of disclosing energy use for experiments in machine translation alongside BLEU scores to make the cost-benefit trade-off explicit.", "We released the largest publicly available parallel corpora for many language pairs and demonstrated their benefit for training machine translation systems.", "Going beyond providing data, the goals of this project include the creation of publicly available infrastructure to explore new research directions on parallel corpus mining by releasing open source code for the entire pipeline and public benchmarks for individual processing steps.", "Each of the processing steps we describe here still has great potential for improvement, and we hope that our work contributes to the development of novel methods, both in terms of better processing of raw parallel data sources and in increasing the robustness of neural machine translation training when faced with noisy data.", "We are especially interested in further extending this work into low-resource languages, where resources tend to be noisier and underlying models to support data mining less reliable.", "This work has been supported in part by three projects funded by the Connecting Europe Facility of the European Union (paracrawl.eu), two Google Faculty Research Awards to Philipp Koehn, a Mozilla research grant to Kenneth Heafield, and a donation
from eBay to Kenneth Heafield.", "Hosting is provided by the AWS Public Dataset Program.", "This work was performed using resources provided by the Cambridge Service for Data Driven Discovery (CSD3) operated by the University of Cambridge Research Computing Service (http://www.csd3.cam.ac.uk/), provided by Dell EMC and Intel using Tier-2 funding from the Engineering and Physical Sciences Research Council (capital grant EP/P020259/1), and DiRAC funding from the Science and Technology Facilities Council (www.dirac.ac.uk).", "This paper is the authors' opinion and not necessarily that of the funders." ]
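The machine-translation-based document alignment workflow described in the passage above (translate documents from language A into language B, build a tf-idf score matrix, match with cosine similarity) can be sketched in a few lines. This is a minimal illustration assuming scikit-learn and greedy one-to-one matching; the project's actual compute_matches.py may differ in detail.

```python
# Minimal sketch of tf-idf document matching with cosine similarity, as in
# the machine-translation-based alignment workflow described above.
# Assumes scikit-learn and greedy one-to-one matching; the project's actual
# compute_matches.py may differ in detail.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_documents(translated_docs_a, docs_b):
    """translated_docs_a: documents of language A machine-translated into
    language B; docs_b: documents originally in language B.
    Returns a list of (score, index_a, index_b) document pairs."""
    vectorizer = TfidfVectorizer()
    # Fit on both collections so they share a single vocabulary.
    matrix = vectorizer.fit_transform(list(translated_docs_a) + list(docs_b))
    scores = cosine_similarity(matrix[:len(translated_docs_a)],
                               matrix[len(translated_docs_a):])

    # Greedy matching: repeatedly take the highest-scoring unused pair.
    candidates = sorted(
        ((scores[i, j], i, j)
         for i in range(scores.shape[0]) for j in range(scores.shape[1])),
        reverse=True)
    pairs, used_a, used_b = [], set(), set()
    for score, i, j in candidates:
        if i not in used_a and j not in used_b:
            pairs.append((score, i, j))
            used_a.add(i)
            used_b.add(j)
    return pairs
```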
[ "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "other", "result", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "other", "objective", "objective", "objective", "objective", "other", "other", "other", "other" ]
[ "Model ensemble techniques often increase task performance in neural networks; however, they require increased time, memory, and management effort.", "In this study, we propose a novel method that replicates the effects of a model ensemble with a single model.", "Our approach creates K -virtual models within a single parameter space using K -distinct pseudo-tags and K -distinct vectors.", "Experiments on text classification and sequence labeling tasks on several datasets demonstrate that our method emulates or outperforms a traditional model ensemble with 1 /K -times fewer parameters.", "A model ensemble is a promising technique for increasing the performance of neural network models (Lars. and Peter., 1990; Anders and Jesper, 1994).", "This method combines the outputs of multiple models that are individually trained using the same training data.", "Recent submissions to natural language processing(NLP) competitions are primarily composed of neural network ensembles (Bojar et al., 2018; Barrault et al., 2019).", "Despite its effectiveness, a model ensemble is costly.", "Because it handles multiple models, it requires increased time for training and inference, increased memory, and greater management effort.", "Therefore, the model ensemble technique cannot always be applied to real systems, as many systems, such as edge devices, must work with limited computational resources.", "In this study, we propose a novel method that replicates the effects of the ensemble technique with a single model.", "Following the principle that aggregating multiple models improves performance, we create multiple virtual models in a shared space.", "Our method virtually inflates the training data K times with K -distinct pseudo-tags [Tag 1] I watched this", "appended to all input data.", "It also incorporates K distinct vectors, which correspond to pseudo-tags.", "Each pseudo-tag k { 1 , . . . , K } is attached to the beginning of the input sentence, and the k -th vector is added to the embedding vectors for all tokens in the input sentence.", "Fig. 1 presents a brief overview of our proposed method.", "Intuitively, this operation allows the model to shift the embedding of the same data to the k -th designated subspace and can be interpreted as explicitly creating K virtual models in a shared space.", "We thus expect to obtain the same (or similar) effects as the ensemble technique composed of K models with our K virtual models generated from a single model.", "Experiments in text classification and sequence labeling tasks reveal that our method outperforms single models in all settings with the same parameter size.", "Moreover, our technique emulates or surpasses the normal ensemble with 1 /K -times fewer parameters on several datasets.", "The neural network ensemble is a widely studied method (Lars. 
and Peter., 1990; Anders and Jesper, 1994; Hashem, 1994; Opitz and Shavlik, 1996); however, studies have focused mainly on improving performance while ignoring cost, such as computational cost, memory space, and management cost.", "Several methods have overcome the shortcomings of traditional ensemble techniques.", "For training, Snapshot Ensembles (Huang et al., 2017) used a single model to construct multiple models by converging into multiple local minima along the optimization path.", "For inference, distillation (Hinton et al., 2015) transferred the knowledge of the ensemble model into a single model.", "These methods use multiple models either during training or inference, which partially solves the negative effects of the traditional ensemble.", "The incorporation of pseudo-tags is a standard technique widely used in the NLP community (Rico et al., 2016; Melvin et al., 2017).", "However, to the best of our knowledge, our approach is the first attempt to incorporate pseudo-tags as an identification marker of virtual models within a single model.", "The most similar approach to ours is dropout (Srivastava et al., 2014), which stochastically omits each hidden unit during each mini-batch, and in which all units are utilized for inference.", "Huang et al. (2017) interpreted this technique as implicitly using an exponential number of virtual models within the same network.", "As opposed to dropout, our method explicitly utilizes virtual models with a shared parameter, which is, as discussed in Section 5, complementary to dropout.", "The target tasks of this study are text classification and sequence labeling.", "The input is a sequence of tokens (i.e., a sentence).", "Here, x_t denotes the one-hot vector of the t-th token in the input.", "Let E ∈ R^{D×|V|} be the embedding matrix, where D is the dimension of the embedding vectors and V is the vocabulary of the input.", "We obtain the embedding vector e_t at position t by e_t = E x_t.", "Here, we introduce the notation e_{1:T} to represent the list of vectors (e_1, e_2, . . . , e_T) that correspond to the input sentence, where T is the number of tokens in the input.", "Given e_{1:T}, the feature (or hidden) vectors h_t ∈ R^H for all t ∈ {1, . . . , T} are computed by an encoder neural network ENC(·), where H denotes the dimension of the feature vectors.", "Namely, h_{1:T} = ENC(e_{1:T}).", "Finally, the output ŷ given input x_{1:T} is estimated as ŷ = f(h_{1:T}), where f(·) represents the task-dependent function (e.g., a softmax function for text classification and a conditional random field layer for sequence labeling).", "It should be noted that the form of the output ŷ differs depending on the target task.", "In this section, we introduce the proposed method, which we refer to as SINGLEENS.", "Fig.
1 presents an overview of the method.", "The main principle of this approach is to create different virtual models within a single model.", "We incorporate pseudo-tags and predefined distinct vectors.", "For the pseudo-tags, we add special tokens {ℓ_k}_{k=1}^{K} to the input vocabulary, where the hyper-parameter K represents the number of virtual models.", "For the predefined distinct vectors, we leverage mutually orthogonal vectors {o_k}_{k=1}^{K}, where the orthogonality condition requires satisfying o_k · o_k′ ≃ 0 for all (k, k′) when k ≠ k′.", "Finally, we assume that all input sentences start from one of the pseudo-tags.", "We then add the corresponding orthogonal vector o_k of the attached pseudo-tag ℓ_k to the embedding vectors at all positions.", "The new embedding vector e_{0:T} is written in the following form: e^(k)_{0:T} = (ℓ_k, e_1 + o_k, e_2 + o_k, . . . , e_T + o_k).", "We substitute e_{1:T} in Eq. 1 by e^(k)_{0:T} in the proposed method.", "An intuitive explanation of the role of pseudo-tags is to allow a single model to explicitly recognize differences in homogeneous input, while the purpose of orthogonal vectors is to linearly shift the embedding to the virtual model's designated direction.", "Therefore, by combining these elements, we believe that we can define virtual models within a single model and effectively use the local space for each virtual model.", "Aggregating these virtual models can then result in an imitation of an ensemble.", "To evaluate the effectiveness of our method, we conducted experiments on two tasks: text classification and sequence labeling.", "We used the IMDB (Andrew et al., 2011), Rotten (Bo and Lillian,
2005), and RCV1 (Yiming et al., 2004) datasets for text classification, and the CoNLL-2003 (Sang and Meulder, 2003) and CoNLL-2000 (Sang and Sabine, 2000) datasets for sequence labeling.", "Table 1: Test accuracy and parameter size for text classification tasks — IMDB, TFM:GLOVE: SINGLE 12M 87.03, 1/K ENS 14M 81.93 (−5.10), SINGLEENS 12M 87.30 (+0.27), NORMALENS 108M 87.67 (+0.64); IMDB, TFM:BERT: SINGLE 400M 91.99, 1/K ENS 1000M 90.63 (−1.36), SINGLEENS 400M 92.91 (+0.92), NORMALENS 3600M 92.75 (+0.76); Rotten, TFM:BERT: SINGLE 400M 81.75, 1/K ENS 1000M 82.67 (+0.92), SINGLEENS 400M 85.01 (+3.26), NORMALENS 3600M 82.57 (+0.82); RCV1, TFM:BERT: SINGLE 400M 87.18, 1/K ENS 1000M 80.27 (−6.91), SINGLEENS 400M 89.16 (+1.98), NORMALENS 3600M 90.01 (+2.83).", "We used the Transformer model (Vaswani et al., 2017) as the base model for all experiments, and its token vector representations were then empowered by pretrained vectors of GloVe (Jeffrey et al., 2014), BERT (Devlin et al., 2018), or ELMo (Matthew et al., 2018).", "The models are referred to as TFM:GLOVE, TFM:BERT, and TFM:ELMO, respectively.", "For TFM:BERT, we incorporated the feature (or hidden) vectors of the final layer in the BERT model as the embedding vectors while adopting the drop-net technique (Zhu et al., 2020).", "All the models have dropout layers to assess the complementarity of our method and dropout.", "We compared our method (SINGLEENS) to a single model (SINGLE), a normal ensemble (NORMALENS), and a normal ensemble in which each component has approximately 1/K parameters (1/K ENS).", "Although other ensemble-like methods discussed in Section 2 could have been compared (e.g., snapshot ensemble, knowledge distillation, or dropout during testing to generate predictions and aggregate them), they are imitations of a normal ensemble, and we assumed that the results of a normal ensemble were an upper bound.", "We used K = 9 for reporting the primary results of NORMALENS, 1/K ENS, and SINGLEENS.", "See Appendix A for detailed experimental settings.", "Because BERT requires a fixed number of parameters, we did not reduce the parameters accurately for 1/K TFM:BERT.", "We thus prepared nine pseudo-tags {ℓ_k}_{k=1}^{9} in the same training (trainable) and initialization manner as other embeddings.", "We created untrainable distinct vectors {o_k}_{k=1}^{9} using the implementation by Saxe et al. (2013) that is provided as PyTorch's default function torch.nn.init.orthogonal.", "We empirically determined the correct scaling for the distinct vectors as 1 out of 1, 3, 5, 10, 30, 50, 100, and the scale that was closest to the model's embedding vectors.", "We obtained the final predictions of K ensemble models by averaging and voting the outputs of individual models for text classification and sequence labeling, respectively.", "The results were obtained by averaging five distinct runs with different random seeds.", "Data. We followed the settings used in the implementation by Kiyono et al.
(2018) for data partition.", "See Appendix B for data statistics.", "Our method, SINGLEENS, inflates the training data K times.", "During the inflation, the k-th subset is sampled by bootstrapping (Efron and Tibshirani, 1993) with the corresponding k-th pseudo-tag.", "For NORMALENS and 1/K ENS, we attempted both bootstrapping and normal sampling, and the higher score was reported.", "Results. Table 1 presents the overall results evaluated in terms of accuracy.", "For both TFM:GLOVE and TFM:BERT, SINGLEENS outperformed SINGLE with the same parameter size.", "In our experiments, SINGLEENS achieved the best scores on IMDB and Rotten with TFM:BERT; it recorded 92.91% and 85.01%, which was higher than NORMALENS by 0.16 and 2.44, respectively, with 89% fewer parameters.", "The standard deviation of the results for the IMDB dataset was 0.69 and 0.14 for SINGLE and SINGLEENS, respectively, for TFM:GLOVE, and 0.34 and 0.11, respectively, for TFM:BERT.", "These results support the claim that explicit operations for defining K virtual models have a significant effect for a single model and are complementary to normal dropout.", "Through the series of experiments, we observed that the number of iterations of SINGLEENS was 1.0-1.5 times greater than that of SINGLE.", "Data. We followed the instructions of the task settings used in CoNLL-2000 and CoNLL-2003.", "The statistics of the datasets are presented in Appendix B.", "We inflated the training data nine times for SINGLEENS, and normal sampling was used for NORMALENS and 1/K ENS.", "Because bootstrapping was not effective for the task, the results were omitted.", "Results. As displayed in Table 2, SINGLEENS surpassed SINGLE by 0.44 and 0.14 on CoNLL-2003 and CoNLL-2000, respectively, for TFM:ELMO with the same parameter size.", "However, NORMALENS produced the best results in this setting.", "The standard deviations of the single model and our method were 0.08 and 0.05, respectively, on CoNLL-2000.", "Through the series of experiments, we observed that the number of iterations of SINGLEENS was 1.0-1.5 times greater than that of SINGLE.", "In this section, we investigate the properties of our proposed method.", "Unless otherwise specified, we use TFM:BERT and TFM:ELMO on IMDB and CoNLL-2003 for the analysis.", "Significance of pseudo-tags and distinct vectors. To assess the significance of using both pseudo-
tags and distinct vectors, we conducted an ablation study of our method, SINGLEENS.", "Table 3: Ablation results (Setting, IMDB Accuracy, CoNLL-2003 F1 Score) — SINGLE: 91.99, 91.93.", "We compared our method with the following three settings: 1) Only pseudo-tags, 2) Random distinct vectors, and 3) Random noise.", "In detail, the first setting (Only pseudo-tags) attached the pseudo-tags to the input without adding the corresponding distinct vectors.", "The second setting (Random distinct vectors) randomly shuffles the correspondence between the distinct vectors and pseudo-tags in every iteration during the training.", "Additionally, the third setting (Random noise) adds random vectors as the replacement of the distinct vectors to clarify whether the effect of incorporating distinct vectors is essentially identical to random noise injection techniques or to the explicit definition of virtual models in a single model.", "Table 3 shows the results of the ablation study.", "This table indicates that using both pseudo-tags and distinct vectors, which matches the setting of SINGLEENS, leads to the best performance, while the effect is limited or negative if we use pseudo-tags alone or distinct vectors and pseudo-tags without correspondence.", "Thus, this observation explains that the increase in performance can be attributed to the combinatorial use of pseudo-tags and distinct vectors, and not merely to data augmentation.", "We can also observe from Table 3 that the performance of SINGLEENS was higher than that of 3) Random noise.", "Note that the additional vectors used by SINGLEENS are fixed to a small number K, while those used by Random noise are a large number of different vectors.", "Therefore, this observation supports our claim that the explicit definition of virtual models by distinct vectors has substantial positive effects that are mostly irrelevant to the effect of random noise.", "This observation also supports the assumption that SINGLEENS is complementary to dropout.", "Dropout randomly uses sub-networks by stochastically omitting each hidden unit, which can be interpreted as a variant of Random noise.", "Moreover, it has no specific operations to define an explicitly prepared number of virtual models as SINGLEENS has.", "We conjecture that this difference yields the complementarity by which our proposed method and dropout can co-exist.", "Vector addition. We investigated the patterns with which distinct vectors should be added: 1) Emb, 2) Hidden, and 3) Emb + Hidden.", "Emb adds distinct vectors only to the embedding, while Hidden adds distinct vectors only to the final feature vectors.", "Emb + Hidden adds distinct vectors to both the embedding and final feature vectors.", "As illustrated in Table 4, adding vectors to the embedding is sufficient for improving performance, while adding vectors to hidden vectors has an adverse effect.", "This observation can be explained by the architecture of the Transformer.", "The distinct vectors in the embedding are recursively propagated through the entire network without being absorbed as non-essential information, since the Transformer employs residual connections (He et al., 2015).", "Comparison with normal ensembles. To evaluate the behavior of our method, we examined the relationship between the performance and the number of models used for training.", "Our experiments revealed that having more than nine models did not result in significant performance improvement; thus, we only assessed the results up to nine models.", "Figs. 2 and 3 present the metrics on Rotten and
CoNLL-2003, respectively.", "The performance of our method increased with the number of models, which is a general feature of a normal ensemble.", "Notably, on Rotten, the accuracy of our method rose while that of the other methods did not.", "Investigation of this behavior is left for future work.", "In this paper, we propose a single model ensemble technique called SINGLEENS.", "The principle of SINGLEENS is to explicitly create multiple virtual models in a single model.", "Our experiments demonstrated that the proposed method outperformed single models in both text classification and sequence labeling tasks.", "Moreover, our method with TFM:BERT surpassed the normal ensemble on the IMDB and Rotten datasets, while its parameter size was 1/K-times smaller.", "The results thus indicate that explicitly creating virtual models within a single model improves performance.", "The proposed method is not limited to the two aforementioned tasks, but can be applied to any NLP task as well as other tasks such as machine translation and image recognition.", "Further theoretical analysis can also be performed to elucidate the mechanisms of the proposed method.", "Acknowledgment. The research results were achieved by Research and Development of Deep Learning Technology for Advanced Multilingual Speech Translation, the Commissioned Research of the National Institute of Information and Communications Technology (NICT), Japan.", "The work was partly supported by JSPS KAKENHI Grant Number 19H04162.", "We would like to thank Motoki Sato of Preferred Networks and Shun Kiyono of RIKEN for cooperating in preparing the experimental data.", "We would also like to thank the three anonymous reviewers for their insightful comments." ]
[ "abstain", "objective", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "result", "result", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Effective projection-based cross-lingual word embedding (CLWE) induction critically relies on the iterative self-learning procedure.", "It gradually expands the initial small seed dictionary to learn improved cross-lingual mappings.", "In this work, we present CLASSYMAP , a classification-based approach to self-learning , yielding a more robust and a more effective induction of projection-based CLWEs.", "Unlike prior self-learning methods, our approach allows for integration of diverse features into the iterative process.", "We show the benefits of CLASSYMAP for bilingual lexicon induction: we report consistent improvements in a weakly supervised setup (500 seed translation pairs) on a benchmark with 28 language pairs.", "Cross-lingual word embeddings (CLWEs), that is, representations of words in a shared crosslingual vector space, enable multilingual modeling of meaning and facilitate cross-lingual transfer for downstream NLP tasks (Ruder et al., 2019).", "One of their primary use cases is bilingual lexicon induction (BLI), that is, learning translation correspondences across languages which benefit the development of core language technology also for resource-poor languages and domains (Adams et al., 2017; Smith et al., 2017; Heyman et al., 2018; Hangya et al., 2018; Vulic et al., 2019).", "Earlier work focused on joint CLWE induction from bilingual corpora, relying on word(Kle-mentiev et al., 2012; Gouws and Sgaard, 2015), sentence(Zou et al., 2013; Hermann and Blunsom, 2014; Coulmance et al., 2015; Levy et al., 2017), or document-level supervision (Sgaard et al., 2015; Vulic and Moens, 2016).", "However, recent focus is predominantly on post-hoc alignment of independently trained monolingual word embeddings: the Equal contribution.", "so-called projection-based or mapping approaches (Mikolov et al., 2013; Conneau et al., 2018; Joulin et al., 2018; Artetxe et al., 2018b; Patra et al., 2019).", "Such methods are particularly suitable for weakly supervised learning setups: they support CLWE induction with only as much as few thousand word translation pairs as the bilingual supervision.", "1 One critical component of weakly supervised projection-based CLWEs is a self-learning procedure that iteratively refines the initial seed dictionary to learn projections of increasingly higher quality.", "This process leads to substantial improvements of the initially mapped space, especially with smaller seed dictionaries (Artetxe et al., 2017; Vulic et al., 2019).", "However, current self-learning procedures are still rather basic, typically relying only on direct extraction of (mutual) nearest neighbors from the current shared space (Conneau et al., 2018; Artetxe et al., 2018b; Glavas et al., 2019).", "In this work, we propose a more sophisticated self-learning procedure for weakly supervised projection-based CLWE methods, and show its benefits for a wide range of language pairs.", "We frame self-learning as iterative classification-based process, which yields several benefits over the previously used self-learning mechanisms.", "1) It enables integration of a variety of heterogeneous features at different levels of granularity (e.g., word-level vs. 
orthographic features); some translation cues (e.g., subword-level overlap) have been ignored by previous self-learning approaches.", "In the extreme, fully unsupervised projection-based CLWEs extract such seed bilingual lexicons from scratch on the basis of monolingual data only (Conneau et al., 2018; Artetxe et al., 2018b; Hoshen and Wolf, 2018; Alvarez-Melis and Jaakkola, 2018; Chen and Cardie, 2018; Mohiuddin and Joty, 2019, inter alia).", "However, as shown in recent comparative empirical analyses (Glavaš et al., 2019; Vulić et al., 2019), using seed sets of only 500-1,000 translation pairs, with all other components equal, always outperforms fully unsupervised methods.", "Therefore, we focus on a more natural weakly supervised setup (Artetxe et al., 2020) instead, i.e., we assume the existence of at least 500 seed translations for each language pair in consideration.", "2) It allows us to control for the reliability of translation pairs considered as candidates for the dictionary updates in the current iteration.", "Effectively, this helps reduce noise in the process as the training dictionary grows.", "3) As suggested by prior work on classification-based BLI (Irvine and Callison-Burch, 2017; Heyman et al., 2017), framing the actual BLI task as a classification problem results in further gains in the final BLI performance.", "We extensively evaluate our classification-based self-learning procedure, termed CLASSYMAP, on the standard BLI data set (Glavaš et al., 2019) spanning 28 pairs of diverse languages.", "The integration of the proposed self-learning method into VECMAP (Artetxe et al., 2018b), a state-of-the-art projection-based CLWE framework, yields substantial gains over previous self-learning procedures.", "We demonstrate that the improvements are indeed achieved through the synergy of diverse features used by the classifier.", "We also demonstrate further BLI improvements when we treat BLI as a supervised classification-based task.", "Projection-Based CLWE Methods (linearly) align independently trained monolingual word embeddings X_1 of the source language L1 and X_2 of the target language L2, using a seed word translation dictionary D (Mikolov et al., 2013; Artetxe et al., 2018a).", "Working in weakly supervised setups, we assume the existence of some translation pairs (≥500 pairs) in D.", "Let X_{1,D} ⊂ X_1 and X_{2,D} ⊂ X_2 refer to the row-aligned subsets of the monolingual embedding spaces containing vectors of translation pairs from D.", "Those are used to learn orthogonal transformations T_1 and T_2 that define the final shared cross-lingual space W_cl = W_1 ∪ W_2, where W_1 = X_1 T_1 and W_2 = X_2 T_2.", "Our departure point is a standard self-learning setup from related work (Artetxe et al., 2018b; Conneau et al., 2018), outlined in the following.", "At each iteration k, the dictionary D^(k) is first used to learn the joint space W^(k)_cl = W^(k)_1 ∪ W^(k)_2.", "We use VECMAP due to its very competitive and robust BLI performance according to the recent comparative studies (Glavaš et al., 2019; Vulić et al., 2019; Doval et al., 2019).", "We note that our methodology is equally applicable to other projection-based methods that employ self-learning, e.g., (Conneau et al., 2018; Mohiuddin and Joty, 2019), and our preliminary results with other methods suggest similar benefits stemming from the classification-based approach.", "The nearest neighbours in W^(k)_cl are then used to extract the new dictionary D^(k+1).", "Previous work typically relies on a
Previous work typically relies on a variant of mutual nearest neighbours in the aligned embedding space of the current iteration to select likely translation candidates for the next.", "However, as hinted by Lubin et al. (2019), that procedure still results in many noisy candidates inserted into the extended seed sets, and the error may get amplified over subsequent iterations.", "New Self-Learning Procedure.", "Therefore, we propose a more versatile self-learning process.", "We train a supervised classifier in each iteration: given a word pair, it produces a probability score denoting to what extent the pair is a correct translation pair.", "The classifier can be fed a wide range of features on the character, subword, and word levels.", "We apply the classifier in two ways.", "First, at iteration k the classification scores are used to select likely translation candidates which are added to the dictionary D^(k+1) for iteration k+1.", "Second, similar to Heyman et al. (2017), at test time we use the classifier scores to rerank translation candidates produced by 1) finding nearest neighbours in the final aligned embedding space and 2) considering orthographically similar candidates.³", "A high-level overview of the proposed classification-based self-learning procedure is outlined in Algorithm 1.", "Self-Learning: Components.", "For implementing the AlignEmbeddings operation (see Algorithm 1) we rely on the VECMAP⁴ system (Artetxe et al., 2018b) in its supervised variant.", "(³ We later show in §3 that both usages are beneficial for BLI. The former yields improved CLWEs directly. We plan to probe the usefulness of the CLWEs in other tasks beyond BLI in future work. The latter (reranking) step, on the other hand, is tied to the BLI task in particular. For this reason we later report all BLI results both with and without reranking.)", "(⁴ https://github.com/artetxem/vecmap)", "The nn function returns word pairs that are nearest neighbours in a given aligned embedding space.", "The TrainClassifier functionality can be instantiated using any standard classification framework.", "In this work, we opt for a simple multi-layer perceptron with a single hidden layer.", "A very important design choice concerns generating negative training examples for the classifier.", "All word pairs in the dictionary at the current iteration, D^(k), are used as positive examples.", "For each positive pair (s, t), we generate two negative examples: 1) (s, x), where x is sampled uniformly from the N_o target words which are orthographically (measured by edit distance) most similar to s; 2) (s, y), where y is sampled uniformly from the N_c target words closest (by cosine) to s in the current space W_cl^(k).", "This strategy performed considerably better than randomly generating negative examples.", "The intuition is as follows: at test time the classifier must operate on word pairs that are generated using nearest neighbour search.", "Such word pairs are not random, but are rather very close in the aligned embedding space and are often orthographically similar.", "Thus, this strategy for generating negative samples makes the training conditions for the classifier better reflect the test conditions.", "Features.", "The classification-based approach allows for the integration of a wide spectrum of diverse features that capture different word translation evidence.", "We outline the sets of features used in this work, computed for each word pair (s, t).
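
Before listing the features, here is a minimal sketch of the negative-example generation strategy described above. The edit_dist helper and the index-based interface are assumptions; a real implementation would also presumably exclude known gold translations from the sampled negatives.

    import random
    import numpy as np

    def hard_negatives(s_idx, n_tgt_words, edit_dist, W1, W2, N_o=5, N_c=5):
        """For one positive pair (s, t), draw two 'hard' negatives:
        one orthographic and one semantic.

        edit_dist: function(src_idx, tgt_idx) -> int (assumed helper).
        W1, W2: current aligned embedding spaces, rows L2-normalized."""
        # 1) Orthographic negative: sample uniformly from the N_o targets
        #    most similar to s by edit distance.
        by_edit = sorted(range(n_tgt_words), key=lambda j: edit_dist(s_idx, j))
        x = random.choice(by_edit[:N_o])
        # 2) Semantic negative: sample uniformly from the N_c targets
        #    closest to s by cosine in the current shared space W_cl^(k).
        sims = W2 @ W1[s_idx]
        by_cos = np.argsort(-sims)
        y = int(random.choice(list(by_cos[:N_c])))
        return (s_idx, x), (s_idx, y)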
F1. Edit distance: Levenshtein and Jaro-Winkler distances between s and t (Cohen et al., 2003).", "Following Heyman et al. (2017) we also include the normalized edit distance, the log of the rank of t in a list sorted by edit distance with respect to s, as well as a product of these two values.", "F2.", "Cosine similarity of s and t in W_cl^(k) (at iteration k).", "F3.", "Aligned embeddings of s and t, PCA-reduced to 10 dimensions (20 features in total).", "F4.", "Normalized n-gram overlap (Šarić et al., 2012).", "F5. Character n-grams: we extract all character n-grams and use χ² feature selection to select the 10 most indicative ones.", "The intuition is to allow the model to recognize indicative prefixes or suffixes.", "F6.", "Subword-level similarity: we use multilingual subword embeddings (SWEs) based on BPEs (Heinzerling and Strube, 2018).", "We add the following features:", "i) we average the BPEs of s and t and calculate the cosine similarity of the resulting vectors,", "ii) the pairwise maximum cosine similarity over all pairs of SWEs (one from s and the other from t), and", "iii) the Earth Mover's distance between the two sets of SWEs (Kusner et al., 2015).", "F7.", "Frequencies: we provide the rank of the word in a list of all words sorted by frequency.", "The ranks are normalized by the number of words.", "At test time, if we use the classifier to perform the final reranking, we take for each source word s a set of candidate target word translations as the union of 1) the top N_ro target word neighbours of s by edit distance, and 2) the top N_rc target word neighbours of s by cosine in the final aligned W_cl.", "We then score the N_ro + N_rc candidates using the classifier from the last self-learning iteration.", "Monolingual Vectors and BLI Data.", "Following prior work (Artetxe et al., 2018b; Glavaš et al., 2019), we start from monolingual fastText vectors trained on full Wikipedias for each language (Bojanowski et al., 2017); vocabularies are trimmed to the 200K most frequent words.", "We evaluate on the standard BLI dataset from Glavaš et al. (2019).
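
A minimal sketch of the test-time reranking step described above. The edit_neighbours and classifier_score helpers are assumptions standing in for the paper's edit-distance index and trained MLP.

    import numpy as np

    def rerank(s_idx, W1, W2, edit_neighbours, classifier_score,
               N_ro=3, N_rc=3):
        """Pool candidates from edit-distance and cosine neighbourhoods,
        then sort them by the classifier from the last iteration.

        edit_neighbours: function(src_idx, k) -> top-k target indices
        by edit distance (assumed helper).
        classifier_score: function(src_idx, tgt_idx) -> probability that
        the pair is a correct translation."""
        cos = W2 @ W1[s_idx]
        cand = set(np.argsort(-cos)[:N_rc].tolist())
        cand |= set(edit_neighbours(s_idx, N_ro))
        # Best candidate first.
        return sorted(cand, key=lambda j: -classifier_score(s_idx, j))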
It comprises 28 language pairs with a good balance of typologically similar and distant languages: English (EN), German (DE), Italian (IT), French (FR), Russian (RU), Croatian (HR), Turkish (TR), and Finnish (FI).", "As our focus is on weakly supervised setups, we use only 500 translation pairs as our initial seed dictionary.", "We report BLI performance using the standard Precision@1 (P@1) measure.", "Classifier Details.", "We use the Adam optimizer (Kingma and Ba, 2015) and regularize the model via an ℓ2-penalty on the weights and early stopping on 10% of held-out data.", "Early stopping is performed for each language pair separately, while other hyperparameter values are found by grid search⁵ maximizing a three-fold cross-validation score on the training data for a randomly selected language pair (EN–HR), and reused in all other experiments.", "Hyperparameters.", "We find values for other hyperparameters on held-out data for a randomly chosen language pair: EN–HR.", "Unless otherwise stated, we fix them to the following values for all other experiments and language pairs.", "In Algorithm 1, P = 1000, K = 500, n = 30.", "(⁵ Hidden layer sizes explored are 3, 5, 10, 20, 25 and regularization strengths are 0.0001, 0.01, and 1. The values selected by grid search were 25 and 1, respectively.)", "Further, we sample 2 negative examples per positive example from the sets of size N_o = N_c = 5.", "N_ro = N_rc = 3 when doing the final reranking.", "We note that more careful tuning of these values could lead to further improvements in results.", "Baselines.", "We compare to the VECMAP system (Artetxe et al., 2018b) in its semi-supervised variant as a robust and highly competitive self-learning framework (Glavaš et al., 2019; Vulić et al., 2019).", "The main results over a representative selection of language pairs and setups are provided in Table 1.", "Full results over all 28 pairs are provided in Appendix A. The results indicate several important findings.", "First, classification-based self-learning is more powerful than the standard VECMAP self-learning: we observe gains on 22/28 pairs using CLASSYMAP without the final reranking step, even without language pair-dependent fine-tuning.", "Second, framing BLI as a classification task leads to further gains: we report improvements on 25/28 pairs using CLASSYMAP with the final reranking step over both supervised and semi-supervised VECMAP variants.", "Using reranking with CLASSYMAP seems useful across the board.⁶", "As a side finding, our results also revalidate the evident usefulness of the self-learning procedure for weakly supervised setups in general (Vulić et al., 2019): the average P@1 score across all languages of a supervised VECMAP method based on the same initial dictionary, but without any self-learning, is only 0.111, while we report an average of 0.365 (with final reranking) in Table 1.", "Importantly, the gains seem more pronounced for more difficult, typologically dissimilar, and morphologically rich language pairs such as TR–RU or DE–TR, than for similar languages such as IT–FR, with more isomorphic monolingual spaces (Søgaard et al., 2018).
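
P@1, the measure used throughout these results, is straightforward to compute; a minimal sketch, assuming gold dictionaries may list several valid translations per source word:

    def precision_at_1(predictions, gold):
        """predictions: dict src_word -> ranked list of candidate translations.
        gold: dict src_word -> set of correct translations.
        Returns the fraction of source words whose top-ranked candidate
        is a gold translation."""
        hits = sum(1 for s, cands in predictions.items()
                   if cands and cands[0] in gold.get(s, set()))
        return hits / len(predictions)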
To analyze this further, we have run additional experiments on the BLI evaluation sets of Vulić et al. (2019), comprising more typologically distant language pairs⁷, with similar conclusions.", "(⁶ We have also probed a variant where we learn a classifier for the final reranking step on top of VECMAP's output after its self-learning procedure.", "However, as suggested by the results in Table 1, this leads to drops in performance compared to standard semi-supervised VECMAP.", "We speculate that this is due to higher levels of noise in the final VECMAP dictionary.)", "(⁷ github.com/cambridgeltl/panlex-bli)", "(Table 1, excerpt; columns are VECMAP (sup), VECMAP, CLASSYMAP: TR–HR .030, .160/.171, .200/.227; DE–TR .050, .207/.203, .221/.268.)", "For instance, with 500 seed pairs CLASSYMAP with reranking scores 24.6 P@1 for Estonian–Esperanto and 16.6 for Hungarian–Basque.", "The strongest baselines achieve P@1 of 20.0 and 13.8, respectively.", "In sum, our classification-based approach holds promise to guide future work, especially on distant language pairs.", "Step Size and the Number of Iterations.", "We now analyze how two vital components of self-learning impact the final BLI scores: 1) the number of added dictionary entries per iteration (i.e., step size, see Table 3), and 2) the number of iterations (Figure 1).", "For brevity, we run the analyses on several difficult language pairs: DE–RU, TR–FI, HR–FR, and EN–FI.", "The results suggest that the step size has only a moderate impact on the final scores, and is language pair-dependent.", "However, all three options improve over the baseline self-learning method, and final reranking is again useful across the board.", "According to Figure 1, the optimal number of iterations is also pair-dependent: TR–FI performance steadily increases over time, while DE–RU hits its peak after only 5 iterations and steadily declines afterwards.", "This finding calls for a more careful tuning of this parameter in future work.", "Feature Ablation Analysis.", "We also perform an ablation analysis, reported in Table 4.", "Overall, the results suggest that different features contribute to the final performance.", "This corroborates our hypothesis that one of the main advantages of the classification-based approach is its ability to fuse different translation evidence.", "However, there are cases (e.g., using BPE for DE–RU or TR–FI) where", "a feature set can negatively affect performance.", "(Table 2: Performance for varying initial dictionary sizes, with cells for 500 / 1k / 3k / 5k seed translation pairs, each reporting four scores. DE–RU: .111/.193/.212/.239, .232/.191/.224/.249, .301/.194/.244/.277, .303/.192/.262/.290. EN–FI: .081/.238/.299/.350, .219/.238/.313/.363, .320/.238/.318/.362, .352/.240/.330/.370. HR–FR: .053/.352/.363/.411, .178/.351/.368/.406, .325/.352/.376/.420, .353/.359/.372/.417. TR–FI: .034/.200/.217/.235, .111/.197/.234/.249, .213/.197/.246/.266, .242/.198/.258/.274.)", "In sum, this small ablation study warrants finer-grained and language pair-dependent feature selection in future work.", "Seed Dictionary Size.", "We also provide additional results when varying the size of the initial seed dictionary in Table 2.", "The main finding is that, while the absolute BLI scores are naturally higher with larger seed dictionaries, CLASSYMAP remains useful even with much larger dictionary sizes (see the results with 3k and 5k seed pairs).", "CLASSYMAP with reranking remains the strongest BLI method, corroborating our previous findings.
We introduced CLASSYMAP, a novel classification-based approach to self-learning, which is a crucial component of projection-based cross-lingual word embedding induction models in low-data regimes.", "We reported its usefulness and robustness across a wide spectrum of diverse language pairs in the BLI task, confirming the benefit of learning classifiers both as part of the self-learning procedure and for the final word retrieval.", "This proof-of-concept work opens up a wide spectrum of interesting avenues for future research, including the use of more powerful classifiers, more sophisticated features (e.g., character-level transformers), and fine-grained linguistic analyses on the importance of disparate features over different language pairs.", "One particularly exciting direction is the application of our classification-based self-learning framework on top of the most recent methods that induce bilingual spaces via non-linear alignments (Glavaš and Vulić, 2020; Mohiuddin and Joty, 2020).", "The code is available online at: https://github.com/mladenk42/ClassyMap .", "IV and AK are supported by the ERC Consolidator Grant LEXICAL (no. 648909) awarded to AK.", "GG is supported by the Eliteprogramm of the Baden-Württemberg Stiftung (AGREE grant).", "We thank the reviewers for their insightful suggestions." ]
[ "abstain", "abstain", "method", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "method", "other", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "objective", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "method", "abstain", "abstain", "other", "other", "other", "other" ]
[ "We present a new approach to learning semantic parsers from multiple datasets, even when the target semantic formalisms are drastically different, and the underlying corpora do not overlap.", "We handle such disjoint data by treating annotations for unobserved formalisms as latent structured variables.", "Building on state-of-the-art baselines, we show improvements both in frame-semantic parsing and semantic dependency parsing by modeling them jointly.", "Our code is open-source and available at https://github.com/ Noahs-ARK/NeurboParser .", "Semantic parsing aims to automatically predict formal representations of meaning underlying natural language, and has been useful in question answering (Shen and Lapata, 2007), text-to-scene generation (Coyne et al., 2012), dialog systems (Chen et al., 2013) and social-network extraction (Agarwal et al., 2014), among others.", "Various formal meaning representations have been developed corresponding to different semantic theories (Fill-more, 1982; Palmer et al., 2005; Flickinger et al., 2012; Banarescu et al., 2013).", "The distributed nature of these efforts results in a set of annotated resources that are similar in spirit, but not strictly compatible.", "A major axis of structural divergence in semantic formalisms is whether based on spans (Baker et al., 1998; Palmer et al., 2005) or dependencies (Surdeanu et al., 2008; Oepen et al., 2014; Banarescu et al., 2013; Copestake et al., 2005, inter alia ).", "Depending on application requirements, either might be most useful in a given situation.", "Learning from a union of these resources seems promising, since more data almost always translates into better performance.", "This is indeed the case for two prior techniquesparameter sharing Only a few books fell in the reading arg1 room .", "Figure 1 : An example sentence from the FrameNet 1.5 corpus, shown with an author-annotated DM semantic dependency graph (above) and frame-semantic annotation (below).", "Two more gold frames (and their arguments) have been omitted for space.", "(FitzGerald et al., 2015; Kshirsagar et al., 2015), and joint decoding across multiple formalisms using cross-task factors that score combinations of substructures from each (Peng et al., 2017).", "Parameter sharing can be used in a wide range of multitask scenarios, when there is no data overlap or even any similarity between the tasks (Collobert and Weston, 2008; Sgaard and Goldberg, 2016).", "But techniques involving joint decoding have so far only been shown to work for parallel annotations of dependency-based formalisms, which are structurally very similar to each other (Llus et al., 2013; Peng et al., 2017).", "Of particular interest is the approach of Peng et al., where three kinds of semantic graphs are jointly learned on the same input, using parallel annotations.", "However, as new annotation efforts cannot be expected to use the same original texts as earlier efforts, the utility of this approach is limited.", "We propose an extension to Peng et", "al.'s formulation which addresses this limitation by considering disjoint resources , each containing only a single kind of annotation.", "Moreover, we consider structurally divergent formalisms, one dealing with semantic spans and the other with semantic 1492 dependencies.", "We experiment on frame-semantic parsing (Gildea and Jurafsky, 2002; Das et al., 2010), a span-based semantic role labeling (SRL) task ( 2.1), and on a dependency-based minimum recursion semantic parsing (DELPH-IN MRS, or DM; Flickinger et al., 
2012) task ( 2.2).", "See Figure 1 for an example sentence with gold FrameNet annotations, and author-annotated DM representations.", "Our joint inference formulation handles missing annotations by treating the structures that are not present in a given training example as latent variables ( 3).", "1 Specifically, semantic dependencies are treated as a collection of latent variables when training on FrameNet examples.", "Using this latent variable formulation, we present an approach for relating spans and dependencies, by explicitly scoring affinities between pairs of potential spans and dependencies.", "Because there are a huge number of such pairs, we limit our consideration to only certain pairsour design is inspired by the head rules of Surdeanu et al. (2008).", "Further possible span-dependency pairs are pruned using an 1 -penalty technique adapted from sparse structure learning ( 5). Neural network architectures are used to score frame-semantic structures, semantic dependencies, as well as cross-task structures ( 4). To summarize, our contributions include: using a latent variable formulation to extend cross-task scoring techniques to scenarios where datasets do not overlap; learning cross-task parts across structurally divergent formalisms; and using an 1 -penalty technique to prune the space of cross task parts. Our approach results in a new state-of-the-art in frame-semantic parsing, improving prior work by 0.8% absolute F 1 points ( 6), and achieves competitive performance on semantic dependency parsing. Our code is available at https://github.com/Noahs-ARK/ NeurboParser . 2 Tasks and Related Work We describe the two tasks addressed in this workframe-semantic parsing ( 2.1) and semantic dependency parsing ( 2.2)and discuss how 1 Following past work on support vector machines with latent variables (Yu and Joachims, 2009), we use the term latent variable, even though the model is not probabilistic. their structures relate to each other ( 2.3). 2.1 Frame-Semantic Parsing Frame-semantic parsing is a span-based task, under which certain words or phrases in a sentence evoke semantic frames . A frame is a group of events, situations, or relationships that all share the same set of participant and attribute types, called frame elements or roles . Gold supervision for frame-semantic parses comes from the FrameNet lexicon and corpus (Baker et al., 1998). Concretely, for a given sentence, x , a frame-semantic parse y consists of: a set of targets , each being a short span (usu-ally a single token 2 ) that evokes a frame; for each target t , the frame f that it evokes; and for each frame f , a set of non-overlapping argument spans in the sentence, each argument a = ( i, j, r ) having a start token index i , end token index j and role label r . The lemma and part-of-speech tag of a target comprise a lexical unit (or LU ). The FrameNet ontology provides a mapping from an LU to the set of possible frames it could evoke, F . Every frame f F is also associated with a set of roles, R f under this ontology. For example, in Figure 1, the LU fall.v evokes the frame MOTION DIRECTIONAL . The roles THEME and PLACE (which are specific to MOTION DIRECTIONAL ), are filled by the spans Only a few books and in the reading room respectively. LOCATIVE RELATION has other roles (PROFILED REGION , ACCESSIBILITY , DEIXIS , etc.) which are not realized in this sentence. 
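
The frame-semantic parse just defined (targets, an evoked frame per target, and labeled argument spans) maps naturally onto a small data structure. A minimal sketch, using the Figure 1 example from the text; the class and field names are illustrative, not the paper's code:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Argument:
        start: int   # start token index i
        end: int     # end token index j (inclusive)
        role: str    # role label r from the frame's role set R_f

    @dataclass
    class FrameAnnotation:
        target: List[int]          # token indices of the target span
        frame: str                 # frame evoked by the target's LU
        arguments: List[Argument]  # non-overlapping argument spans

    # Figure 1: the LU fall.v evokes MOTION_DIRECTIONAL, with
    # THEME = "Only a few books" and PLACE = "in the reading room".
    sent = "Only a few books fell in the reading room .".split()
    parse = FrameAnnotation(
        target=[4],  # "fell"
        frame="MOTION_DIRECTIONAL",
        arguments=[Argument(0, 3, "THEME"), Argument(5, 8, "PLACE")],
    )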
In this work, we assume gold targets and LUs are given, and parse each target independently, following the literature (Johansson and Nugues, 2007; FitzGerald et al., 2015; Yang and Mitchell, 2017; Swayamdipta et al., 2017, inter alia ). Moreover, following Yang and Mitchell (2017), we perform frame and argument identification jointly. Most prior work has enforced the constraint that a role may be filled by at most one argument span, but following Swayamdipta et al. (2017) we do not impose this constraint, requiring only that arguments for the same target do not overlap. 2 96.5% of targets in the training data are single tokens. 1493 2.2 Semantic Dependency Parsing Broad-coverage semantic dependency parsing ( SDP ; Oepen et al., 2014, 2015, 2016) represents sentential semantics with labeled bilexical dependencies. The SDP task mainly focuses on three semantic formalisms, which have been converted to dependency graphs from their original annotations. In this work we focus on only the DELPH-IN MRS ( DM ) formalism. Each semantic dependency corresponds to a labeled, directed edge between two words. A single token is also designated as the top of the parse, usually indicating the main predicate in the sentence. For example in Figure 1, the left-most arc has head Only , dependent few , and label arg1 . In semantic dependencies, the head of an arc is analogous to the target in frame semantics, the destination corresponds to the argument, and the label corresponds to the role. The same set of labels are available for all arcs, in contrast to the frame-specific roles in FrameNet. 2.3 Spans vs. Dependencies Early semantic role labeling was span-based (Gildea and Jurafsky, 2002; Toutanova et al., 2008, inter alia ), with spans corresponding to syntactic constituents. But, as in syntactic parsing, there are sometimes theoretical or practical reasons to prefer dependency graphs. To this end, Surdeanu et al. (2008) devised heuristics based on syntactic head rules (Collins, 2003) to transform PropBank (Palmer et al., 2005) annotations into dependencies. Hence, for PropBank at least, there is a very direct connection (through syntax) between spans and dependencies. For many other semantic representations, such a direct relationship might not be present. Some semantic representations are designed as graphs from the start (Hajic et al., 2012; Banarescu et al., 2013), and have no gold alignment to spans. Conversely, some span-based formalisms are not annotated with syntax (Baker et al., 1998; He et al., 2015), 3 and so head rules would require using (noisy and potentially expensive) predicted syntax. Inspired by the head rules of Surdeanu et al. (2008), we design cross-task parts, without relying 3 In FrameNet, phrase types of arguments and their grammatical function in relation to their target have been annotated. But in order to apply head rules, the internal structure of arguments (or at least their semantic heads) would also require syntactic annotations. on gold or predicted syntax (which may be either unavailable or error-prone) or on heuristics. 3 Model Given an input sentence x , and target t with its LU , denote the set of valid frame-semantic parses ( 2.1) as Y ( x , t, ) , and valid semantic dependency parses as Z ( x ) . 4 We learn a parameterized function S that scores candidate parses. Our goal is to jointly predict a frame-semantic parse and a semantic dependency graph by selecting the highest scoring candidates: ( y , z ) = arg max ( y , z ) Y ( x ,t, ) Z ( x ) S ( y , z , x , t, ) . 
(1) The overall score S can be decomposed into the sum of frame SRL score S f , semantic dependency score S d , and a cross-task score S c : S ( y , z , x , t, ) = S f ( y , x , t, ) + S d ( z , x ) + S c ( y , z , x , t, ) . (2) S f and S c require access to the target and LU, in addition to x , but S d does not. For clarity, we omit the dependence on the input sentence, target, and lexical unit, whenever the context is clear. Below we describe how each of the scores is computed based on the individual parts that make up the candidate parses. Frame SRL score. The score of a frame-semantic parse consists of the score for a predicate part, s f ( p ) where each predicate is defined as a combination of a target t , the associated LU, , and the frame evoked by the LU, f F ; the score for argument parts, s f ( a ) , each associated with a token span and semantic role from R f . Together, this results in a set of frame-semantic parts of size O ( n 2 |F | |R f | ) . 5 The score for a frame semantic structure y is the sum of local scores of parts in y : S f ( y ) = X y i y s f ( y i ) . (3) The computation of s f is described in 4.2. 4 For simplicity, we consider only a single target here; handling of multiple targets is discussed in 6. 5 With pruning (described in 6) we reduce this to a number of parts linear in n . Also, |F | is usually small (averaging 1.9), as is |R f | (averaging 9.5). 1494 includes include.v Inclusion Evidence to support this argument Total Figure 2 : An example of cross-task parts from the FrameNet 1.5 development set. We enumerate all unlabeled semantic dependencies from the first word of the target ( includes ) to any token inside the span. The red bolded arc indicates the prediction of our model. Semantic dependency score. Following Martins and Almeida (2014), we consider three types of parts in a semantic dependency graph: semantic heads, unlabeled semantic arcs, and labeled semantic arcs. Analogous to Equation 3, the score for a dependency graph z is the sum of local scores: S d ( z ) = X z j z s d ( z j ) , (4) The computation of s d is described in 4.3. Cross task score. In addition to task-specific parts, we introduce a set C of cross-task parts. Each cross-task part relates an argument part from y to an unlabeled dependency arc from z . Based on the head-rules described in 2.3, we consider unlabeled arcs from the target to any token inside the span. 6 Intuitively, an argument in FrameNet would be converted into a dependency from its target to the semantic head of its span. Since we do not know the semantic head of the span, we consider all tokens in the span as potential modifiers of the target. Figure 2 shows examples of cross-task parts. The cross-task score is given by S c ( y , z ) = X ( y i ,z j ) ( y z ) C s c ( y i , z j ) . (5) The computation of s c is described in 4.4. In contrast to previous work (Llus et al., 2013; Peng et al., 2017), where there are parallel annotations for all formalisms, our input sentences contain only one of the twoeither the span-based frame SRL annotations, or semantic dependency graphs from DM. To handle missing annotations, we treat semantic dependencies z as latent when 6 Most targets are single-words ( 2.1). For multi-token targets, we consider only the first token, which is usually content-bearing. decoding frame-semantic structures. 7 Because the DM dataset we use does not have target annotations, we do not use latent variables for frame semantic structures when predicting semantic dependency graphs. 
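wait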
The parsing problem here reduces to z = arg max z Z S d ( z ) , (6) in contrast with Equation 1 . 4 Parameterizations of Scores This section describes the parametrization of the scoring functions from 3. At a very high level: we learn contextualized token and span vectors using a bidirectional LSTM (biLSTM; Graves, 2012) and multilayer perceptrons (MLPs) ( 4.1); we learn lookup embeddings for LUs, frames, roles, and arc labels; and to score a part, we combine the relevant representations into a single scalar score using a (learned) low-rank multilinear mapping. Scoring frames and arguments is detailed in 4.2, that of dependency structures in 4.3, and 4.4 shows how to capture interactions between arguments and dependencies. All parameters are learned jointly, through the optimization of a multitask objective ( 5). Tensor notation. The order of a tensor is the number of its dimensionsan order-2 tensor is a matrix and an order-1 tensor is a vector. Let denote tensor product; the tensor product of two order-2 tensors A and B yields an order4 tensor where ( A B ) i,j,k,l = A i,j B k,l . We use h , i to denote inner products. 4.1 Token and Span Representations The representations of tokens and spans are formed using biLSTMs followed by MLPs. Contextualized token representations. Each token in the input sentence x is mapped to an embedding vector. Two LSTMs (Hochreiter and Schmidhuber, 1997) are run in opposite directions over the input vector sequence. We use the concatenation of the two hidden representations at each position i as a contextualized word embedding for each token: h i = (cid:2) h i ; h i (cid:3) . (7) 7 Semantic dependency parses over a sentence are not constrained to be identical for different frame-semantic targets. 1495 Span representations. Following Lee et al. (2017), span representations are computed based on boundary word representations and discrete length and distance features. Concretely, given a target t and its associated argument a = ( i, j, r ) with boundary indices i and j , we compute three features t ( a ) based on the length of a , and the distances from i and j to the start of t . We concatenate the token representations at a 's boundary with the discrete features t ( a ) . We then use a two-layer tanh -MLP to compute the span representation: g span ( i, j ) = MLP span (cid:0) [ h i ; h j ; t ( a )] (cid:1) . (8) The target representation g tgt ( t ) is similarly computed using a separate MLP tgt , with a length feature but no distance features. 4.2 Frame and Argument Scoring As defined in 3, the representation for a predicate part incorporates representations of a target span, the associated LU and the frame evoked by the LU. The score for a predicate part is given by a multilinear mapping: g pred ( f ) = g fr ( f ) g tgt ( t ) g lu ( ) (9a) s f ( p ) = (cid:10) W , g pred ( f ) (cid:11) , (9b) where W is a low-rank order-3 tensor of learned parameters, and g fr ( f ) and g lu ( ) are learned lookup embeddings for the frame and LU. A candidate argument consists of a span and its role label, which in turn depends on the frame, target and LU. Hence the score for argument part, a = ( i, j, r ) is given by extending definitions from Equation 9: g arg ( a ) = g span ( i, j ) g role ( r ) , (10a) s f ( a ) = (cid:10) W U , g pred ( f ) g arg ( a ) (cid:11) , (10b) where U is a low-rank order-2 tensor of learned parameters and g role ( r ) is a learned lookup embedding of the role label. 
4.3 Dependency Scoring Local scores for dependencies are implemented with two-layer tanh -MLPs, followed by a final linear layer reducing the represenation to a single scalar score. For example, let u = i j denote an unlabeled arc (ua). Its score is: g ua ( u ) = MLP ua (cid:0) [ h i ; h j ] (cid:1) (11a) s d ( u ) = w ua g ua ( u ) , (11b) where w ua is a vector of learned weights. The scores for other types of parts are computed similarly, but with separate MLPs and weights. 4.4 Cross-Task Part Scoring As shown in Figure 2, each cross-task part c consists of two first-order parts: a frame argument part a , and an unlabeled dependency part, u . The score for a cross-task part incorporates both: s c ( c ) = (cid:10) W U V , g pred ( f ) g arg ( a ) w ua g ua ( u ) (cid:11) , (12) where V is a low-rank order-2 tensor of parameters. Following previous work (Lei et al., 2014; Peng et al., 2017), we construct the parameter tensors W , U , and V so as to upper-bound their ranks. 5 Training and Inference All parameters from the previous sections are trained using a max-margin training objective ( 5.1). For inference, we use a linear programming procedure, and a sparsity-promoting penalty term for speeding it up ( 5.2). 5.1 Max-Margin Training Let y denote the gold frame-semantic parse, and let ( y , y ) denote the cost of predicting y with respect to y . We optimize the latent structured hinge loss (Yu and Joachims, 2009), which gives a subdifferentiable upper-bound on : L ( y ) = max ( y , z ) YZ { S ( y , z ) + ( y , y ) } max z Z { S ( y , z ) } . (13) Following Martins and Almeida (2014), we use a weighted Hamming distance as the cost function, where, to encourage recall, we use costs 0.6 for false negative predictions and 0.4 for false positives. Equation 13 can be evaluated by applying the same max-decoding algorithm twiceonce with cost-augmented inference (Crammer et al., 2006), and once more keeping y fixed. Training then aims to minimize the average loss over all training instances. 8 Another potential approach to training a model on disjoint data would be to marginalize out the 8 We do not use latent frame structures when decoding semantic dependency graphs ( 3). Hence, the loss reduces to structured hinge (Tsochantaridis et al., 2004) when training on semantic dependencies. 1496 latent structures and optimize the conditional log-likelihood (Naradowsky et al., 2012). Although max-decoding and computing marginals are both NP-hard in general graphical models, there are more efficient off-the-shelf implementations for approximate max-decoding, hence, we adopt a max-margin formulation. 5.2 Inference We formulate the maximizations in Equation 13 as 01 integer linear programs and use AD 3 to solve them (Martins et al., 2011). We only enforce a non-overlapping constraint when decoding FrameNet structures, so that the argument identification subproblem can be efficiently solved by a dynamic program (Kong et al., 2016; Swayamdipta et al., 2017). When decoding semantic dependency graphs, we enforce the determinism constraint (Flanigan et al., 2014), where certain labels may appear on at most one arc outgoing from the same token. Inference speedup by promoting sparsity. As discussed in 3, even after pruning, the number of within-task parts is linear in the length of the input sentence, so the number of cross-task parts is quadratic. This leads to potentially very slow inference. 
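
The max-margin training objective just described admits a compact sketch. The decoder calls are assumptions standing in for the paper's AD3-based inference; the cost weights follow the weighted Hamming distance given above (0.6 for false negatives, 0.4 for false positives).

    def weighted_hamming(y_pred, y_gold, c_fn=0.6, c_fp=0.4):
        """Cost from Section 5.1, over sets of predicted/gold parts."""
        y_pred, y_gold = set(y_pred), set(y_gold)
        return c_fn * len(y_gold - y_pred) + c_fp * len(y_pred - y_gold)

    def latent_hinge_loss(score, decode_aug, decode_latent, y_gold):
        """Equation 13 (latent structured hinge; Yu and Joachims, 2009):
        L(y*) = max_{y,z} [S(y,z) + cost(y,y*)] - max_z S(y*,z).

        decode_aug() -> (y_hat, z_hat): cost-augmented argmax.
        decode_latent() -> z_star: argmax over z with y fixed to gold.
        Both are assumed to be provided by the ILP decoder."""
        y_hat, z_hat = decode_aug()
        z_star = decode_latent()
        return (score(y_hat, z_hat) + weighted_hamming(y_hat, y_gold)
                - score(y_gold, z_star))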
We address this problem by imposing an 1 penalty on the cross-task part scores: L (cid:0) y (cid:1) + X ( y i ,z j ) C (cid:12)(cid:12) s c ( y i , z j ) (cid:12)(cid:12) , (14) where is a hyperparameter, set to 0 . 01 as a practical tradeoff between efficiency and development set performance. Whenever the score for a cross-task part is driven to zero, that part's score no longer needs to be considered during inference. It is important to note that by promoting sparsity this way, we do not prune out any candidate solutions. We are instead encouraging fewer terms in the scoring function, which leads to smaller, faster inference problems even though the space of feasible parses is unchanged. The above technique is closely related to a line of work in estimating the structure of sparse graphical models (Yuan and Lin, 2007; Friedman et al., 2008), where an 1 penalty is applied to the inverse covariance matrix in order to induce a smaller number of conditional dependencies between variables. To the best of our knowledge, we are the first to apply this technique to the output of neural scoring functions. Here, we are interested in learn-Train Exemplars Dev. Test FN 1.5 17,143 153,952 2,333 4,457 FN 1.7 19,875 192,460 2,308 6,722 DM id 33,961 -1,692 1,410 DM ood --1,849 Table 1 : Number of instances in datasets. ing sparse graphical models only because they result in faster inference, not because we have any a priori belief about sparsity. This results in roughly a 14 speedup in our experiments, without any significant drop in performance. 6 Experiments Datasets. Our model is evaluated on two different releases of FrameNet: FN 1.5 and FN 1.7, 9 using splits from Swayamdipta et al. (2017). Following Swayamdipta et al. (2017) and Yang and Mitchell (2017), each target annotation is treated as a separate training instance. We also include as training data the exemplar sentences, each annotated for a single target, as they have been reported to improve performance (Kshirsagar et al., 2015; Yang and Mitchell, 2017). For semantic dependencies, we use the English DM dataset from the SemEval 2015 Task 18 closed track (Oepen et al., 2015). 10 DM contains instances from the WSJ corpus for training and both in-domain (id) and out-of-domain (ood) test sets, the latter from the Brown corpus. 11 Table 1 summarizes the sizes of the datasets. Baselines. We compare FN performance of our joint learning model (FULL ) to two baselines: BASIC : A single-task frame SRL model, trained using a structured hinge objective. NOCTP: A joint model without cross-task parts. It demonstrates the effect of sharing parameters in word embeddings and LSTMs (like in FULL ). It does not use latent semantic dependency structures, and aims to minimize the sum of training losses from both tasks. We also compare semantic dependency parsing performance against the single task model by Peng 9 https://FN.icsi.berkeley.edu/ fndrupal/ 10 http://sdp.delph-in.net/ . The closed track does not have access to any syntactic analyses. The impact of syntactic features on SDP performance is extensively studied in Ribeyre et al. (2015). 11 Our FN training data does not overlap with the DM test set. We remove the 3 training sentences from DM which appear in FN test data. 1497 Model Prec. Rec. 
F 1 Roth 72.2 68.0 70.0 Tackstrom 75.4 65.8 70.3 FitzGerald 74.8 65.5 69.9 FitzGerald ( 10 ) 75.0 67.3 70.9 open-SESAME 71.0 67.8 69.4 open-SESAME ( 5 ) 71.2 70.5 70.9 Yang and Mitchell (REL ) 77.1 68.7 72.7 Yang and Mitchell (ALL ) 78.8 74.5 76.6 This work (FULL ) 80.4 73.5 76.8 This work (FULL , 2 ) 80.4 74.7 77.4 This work (BASIC ) 79.2 71.7 75.3 This work (NOCTP) 76.9 74.8 75.8 Table 2 : FN 1.5 full structure extraction test performance. denotes the models jointly predicting frames and arguments, and other systems implement two-stage pipelines and use the algorithm by Hermann et al. (2014) to predict frames. K denotes a product-of-experts ensemble of K models. Ensembles a sequential tagging CRF and a relational model. Bold font indicates best performance among all systems. et al. (2017), denoted as NeurboParser (BASIC ). To ensure fair comparison with our FULL model, we made several modifications to their implementation ( 6.3). We observed performance improvements from our reimplementation, which can be seen in Table 5. Pruning strategies. For frame SRL, we discard argument spans longer than 20 tokens (Swayamdipta et al., 2017). We further pretrain an unlabeled model and prune spans with posteriors lower than 1 /n 2 , with n being the input sentence length. For semantic dependencies, we generally follow Martins and Almeida (2014), replacing their feature-rich pruner with neural networks. We observe that O ( n ) spans/arcs remain after pruning, with around 96% FN development recall, and more than 99% for DM. 12 6.1 Empirical Results FN parsing results. Table 2 compares our full frame-semantic parsing results to previous systems. Among them, Tackstrom et al. (2015) and Roth (2016) implement a two-stage pipeline and use the method from Hermann et al. (2014) to predict frames. FitzGerald et al. (2015) uses the 12 On average, around 0 . 8 n argument spans, and 5 . 7 n unlabeled dependency arcs remain after pruning. Model All Ambiguous Hartmann 87.6 Yang and Mitchell 88.2 Hermann 88.4 73.1 This work (BASIC ) 89.2 76.3 This work (NOCTP) 89.2 76.4 This work (FULL ) 89.9 77.7 This work (FULL , 2 ) 90.0 78.0 Table 3 : Frame identification accuracy on the FN 1.5 test set. Ambiguous evaluates only on lexical units having more than one possible frames. denotes joint frame and argument identification, and bold font indicates best performance. 13 same pipeline formulation, but improves the frame identification of Hermann et al. (2014) with better syntactic features. open-SESAME (Swayamdipta et al., 2017) uses predicted frames from FitzGerald et al. (2015), and improves argument identification using a softmax-margin segmental RNN. They observe further improvements from product of experts ensembles (Hinton, 2002). The best published FN 1.5 results are due to Yang and Mitchell (2017). Their relational model (REL ) formulates argument identification as a sequence of local classifications. They additionally introduce an ensemble method (denoted as ALL ) to integrate the predictions of a sequential CRF. They use a linear program to jointly predict frames and arguments at test time. As shown in Table 2, our single-model performance outperforms their REL model, and is on par with their ALL model. For a fair comparison, we build an ensemble (FULL , 2 ) by separately training two models, differing only in random seeds, and averaging their part scores. Our ensembled model outperforms previous best results by 0.8% absolute. Table 3 compares our frame identification results with previous approaches. 
Hermann et al. (2014) and Hartmann et al. (2017) use distributed word representations and syntax features. We follow the FULLLEXICON setting (Hermann et al., 2014) and extract candidate frames from the offi-13 Our comparison to Hermann et al. (2014) is based on their updated version: http://www.aclweb.org/ anthology/P/P14/P14-1136v2.pdf . Ambiguous frame identification results by Yang and Mitchell (2017) and Hartmann et al. (2017) are 75.7 and 73.8. Their ambiguous lexical unit sets are different from the one extracted from the official frame directory, and thus the results are not comparable to those in Table 3. 1498 Full Structure Frame Id. Model Prec. Rec. F 1 All Amb. BASIC 78.0 72.1 75.0 88.6 76.6 NOCTP 79.8 72.4 75.9 88.5 76.3 FULL 80.2 72.9 76.4 89.1 77.5 Table 4 : FN 1.7 full structure extraction and frame identification test results. Bold font indicates best performance. FN 1.7 test set is an extension of FN 1.5 test, hence the results here are not comparable to those reported in Table 2. Model id F 1 ood F 1 NeurboParser (BASIC ) 89.4 84.5 NeurboParser (FREDA 3) 90.4 85.3 NeurboParser (BASIC , reimpl.) 90.0 84.6 This work (NOCTP) 89.9 85.2 This work (FULL ) 90.5 85.9 This work (FULL , 2 ) 91.2 86.6 Table 5 : Labeled parsing performance in F 1 score for DM semantic dependencies. id denotes in-domain WSJ test data, and ood denotes out-of-domain brown corpus test data. Bold font indicates best performance. cial directories. The Ambiguous setting compares lexical units with more than one possible frames. Our approach improves over all previous models under both settings, demonstrating a clear benefit from joint learning. We observe similar trends on FN 1.7 for both full structure extraction and for frame identification only (Table 4). FN 1.7 extends FN 1.5 with more consistent annotations. Its test set is different from that of FN 1.5, so the results are not directly comparable to Table 2. We are the first to report frame-semantic parsing results on FN 1.7, and we encourage future efforts to do so as well. Semantic dependency parsing results. Table 5 compares our semantic dependency parsing performance on DM with the baselines. Our reim-plementation of the BASIC model slightly improves performance on in-domain test data. The NOCTP model ties parameters from word embeddings and LSTMs when training on FrameNet and DM, but does not use cross-task parts or joint prediction. NOCTP achieves similar in-domain test performance, and improves over BASIC on out-of-domain data. By jointly predicting FrameNet Rel. Err. (%) Operation Description BASICFULL Frame error Frame misprediction. 11.3 11.1 Role error Matching span with incorrect role. 12.6(5.2) 13.4(5.9) Span error Matching role with incorrect span. 11.4 12.3 Arg. error Predicted argument does not overlap with any gold span. 18.6 22.4 Missing arg. Gold argument does not overlap with any predicted span. 43.5 38.0 Table 6 : Percentage of errors made by BASIC and FULL models on the FN 1.5 development set. Parenthesized numbers show the percentage of role errors when frame predictions are correct. structures and semantic dependency graphs, the FULL model outperforms the baselines by more than 0.6% absolute F 1 scores under both settings. Previous state-of-the-art results on DM are due to the joint learning model of Peng et al. (2017), denoted as NeurboParser (FREDA 3). They adopted a multitask learning approach, jointly predicting three different parallel semantic dependency annotations. 
Our FULL model's in-domain test performance is on par with FREDA3, and improves over it by 0.6% absolute F1 on out-of-domain test data.", "Our ensemble of two FULL models achieves a new state-of-the-art in both in-domain and out-of-domain test performance.", "Error type breakdown.", "Similarly to He et al. (2017), we categorize prediction errors made by the BASIC and FULL models in Table 6.", "Entirely missing an argument accounts for most of the errors for both models, but we observe fewer errors by FULL compared to BASIC in this category.", "FULL tends to predict more arguments in general, including more incorrect arguments.", "Since candidate roles are determined by frames, frame and role errors are highly correlated.", "Therefore, we also show the role errors when frames are correctly predicted (parenthesized numbers in the second row).", "When a predicted argument span matches a gold span, predicting the semantic role is less challenging.", "Role errors account for only around 13% of all errors, and half of them are due to mispredictions of frames.", "Performance by argument length.", "Figure 3 plots dev. precision and recall of both BASIC and FULL against binned argument lengths.", "(Figure 3: FN 1.5 development precision and recall of BASIC and FULL by different argument lengths. Length is binned to ⌊log_1.6(length)⌋, and precision/recall values are smoothed with loess, with a smoothing parameter of 0.1.)", "We observe two trends:", "(a) FULL tends to predict longer arguments (averaging 3.2) compared to BASIC (averaging 2.9), while keeping similar precision;¹⁴", "(b) recall improvement in FULL mainly comes from arguments longer than 4.", "Our implementation is based on DyNet (Neubig et al., 2017).¹⁵", "We use predicted part-of-speech tags and lemmas using NLTK (Bird et al., 2009).¹⁶", "Parameters are optimized with stochastic sub-gradient descent for up to 30 epochs, with ℓ2 norms of gradients clipped to 1.", "We use 0.33 as the initial learning rate, and anneal it at a rate of 0.5 every 10 epochs.", "Early stopping is applied based on FN development F1.", "We apply logarithm with base 2 to all discrete features, e.g., log2(d + 1) for a distance feature valuing d.", "To speed up training, we randomly sample a 35% subset from the FN exemplar instances each epoch.", "Hyperparameters.", "Each input token is represented as the concatenation of a word embedding vector, a learned lemma vector, and a learned vector for part-of-speech, all updated during training.", "We use 100-dimensional GloVe (Pennington et al., 2014) to initialize word embeddings.", "We apply word dropout (Iyyer et al., 2015) and randomly replace a word w with a special UNK symbol with probability γ/(1 + #(w)), with #(w) being the count of w in the training set.", "We follow the default parameter initialization procedure of DyNet, and an ℓ2 penalty of 10^-6 is applied to all weights.", "(¹⁴ Average gold span length is 3.4 after discarding those longer than 20. ¹⁵ https://github.com/clab/dynet ¹⁶ http://www.nltk.org/)", "(Table 7: Hyperparameters used in the experiments; parenthesized numbers indicate those used by the pretrained pruners. Word embedding dimension 100 (32); lemma embedding dimension 50 (16); POS tag embedding dimension 50 (16); MLP dimension 100 (32); tensor rank r 100 (32); biLSTM layers 2 (1); biLSTM dimensions 200 (64); γ for word dropout 1.0 (1.0).)", "See Table 7 for other hyperparameters.
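
A minimal sketch of the word dropout scheme described above. Note that the γ/(1 + #(w)) form of the probability is reconstructed from the garbled source (the numerator symbol was lost in extraction); with the Table 7 value γ = 1.0 it reduces to 1/(1 + #(w)), so rare words are dropped more often.

    import random

    def word_dropout(tokens, counts, gamma=1.0, unk="<UNK>"):
        """tokens: list of training tokens; counts: dict of training-set
        word counts. Replace w by UNK with probability gamma/(1 + #(w))."""
        return [unk if random.random() < gamma / (1.0 + counts[w]) else w
                for w in tokens]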
Modifications to Peng et al. (2017).", "To ensure fair comparisons, we note two implementation modifications to Peng et al.'s basic model.", "We use a more recent version (2.0) of the DyNet toolkit, and we use 50-dimensional lemma embeddings instead of their 25-dimensional randomly-initialized learned word embeddings.", "7 Conclusion", "We presented a novel multitask approach to learning semantic parsers from disjoint corpora with structurally divergent formalisms.", "We showed how joint learning and prediction can be done with scoring functions that explicitly relate spans and dependencies, even when they are never observed together in the data.", "We handled the resulting inference challenges with a novel adaptation of graphical model structure learning to the deep learning setting.", "We raised the state-of-the-art on DM and FrameNet parsing by learning from both, despite their structural differences and non-overlapping data.", "While our selection of factors is specific to spans and dependencies, our general techniques could be adapted to work with more combinations of structured prediction tasks.", "We have released our implementation at https://github.com/Noahs-ARK/NeurboParser .", "Acknowledgments: We thank Kenton Lee, Luheng He, and Rowan Zellers for their helpful comments, and the anonymous reviewers for their valuable feedback.", "This work was supported in part by NSF grant IIS-1562364.", "References: Apoorv Agarwal, Sriramkumar Balasubramanian, Anup Kotalwar, Jiehan Zheng, and Owen Rambow. 2014. Frame semantic tree kernels for social network extraction from text. In Proc. of EACL. Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proc. of ACL. Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract Meaning Representation for sembanking. In Proc. of LAW-ID. Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly Media," ]
[ "objective", "abstain", "result", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "other", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other" ]
[ "In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks.", "Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods.", "However, such benchmarks are available only for a handful of languages.", "To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard.", "It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others.", "We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR).", "To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications.", "Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks.", "Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models.", "The field of natural language understanding (NLU) experienced a major shift towards knowledge reusability and transfer learning, a phenomenon well established in the field of computer vision.", "Such a shift was enabled by recent introduction of robust, general-purpose models suitable for fine-tuning, like ELMo (Peters et al., 2018), ULMFiT (Howard and Ruder, 2018) and BERT (Devlin et al., 2019).", "These models significantly improved the state-of-the-art on numerous language understanding tasks.", "Since then, the progress accelerated significantly and the new Transformer-based (Vaswani et al., 2017) models are being published every month to claim the latest state-of-the-art performance.", "Such a pace of development would not be possible without standardized and publicly available NLU evaluation benchmarks.", "Among the most popular ones is the recently introduced GLUE (Wang et al., 2019a) consisting of a collection of tasks such as question answering, sentiment analysis, and textual entailment with texts coming from a diverse set of domains.", "Some tasks come with numerous training examples, while others have limited training data.", "On top of that, for some tasks, the training set represents a different domain than the test set.", "This promotes models that learn general language representations and are effective at transferring knowledge across various tasks and domains.", "The GLUE benchmark is constructed based on existing datasets, and its main contribution is the careful choice of tasks together with an online evaluation platform and a leaderboard.", "Unfortunately, most of the progress in NLU is happening for English and Chinese.", "Other languages lack both pretrained models and evaluation benchmarks.", "In this paper, we introduce the comprehensive multi-task benchmark for the Polish language understanding KLEJ (eng. GLUE , also abbreviation for Kompleksowa Lista Ewaluacji Jezykowych , eng. 
Comprehensive List of Language Evaluations).", "KLEJ consists of nine tasks and, similarly to GLUE, is constructed mostly out of existing datasets.", "The tasks were carefully selected to cover a wide range of genres and different aspects of language understanding.", "Following GLUE, to simplify the model evaluation procedure, we adjusted the tasks to fit into a unified scoring scheme (either text classification or regression).", "Alongside the benchmark, we introduce HerBERT, a Transformer-based model trained on several Polish text corpora.", "We compare HerBERT with a set of both standard and recently introduced NLU baselines.", "To summarize, our contributions are:", "1. KLEJ: a set of nine tasks constructed from both existing and newly introduced datasets, used for Polish language understanding evaluation;", "2. an online platform¹ to evaluate and present model results in the form of a leaderboard;", "3. HerBERT: a Transformer-based model for Polish language understanding;", "4. Allegro Reviews: a new sentiment analysis task for the e-commerce domain;", "5. an evaluation of several LSTM-based baselines, multilingual Transformer-based models, and HerBERT.", "The rest of the paper is organized as follows.", "In Section 2, we provide an overview of related work.", "In Section 3, we describe the tasks that make up the KLEJ benchmark.", "In Section 4, we give an overview of the selected baseline methods and introduce the new Transformer-based model for Polish.", "In Section 5, we evaluate all models using the KLEJ benchmark.", "Finally, we conclude our work in Section 6.", "2 Related Work", "The evaluation of NLU models has always been an integral part of their development.", "Even though there are many established tasks on which to evaluate newly proposed models, there is no strict standard specifying which ones to choose.", "The difficulty of a fair comparison between models eventually led to the introduction of multi-task benchmarks that unify the evaluation.", "One such benchmark is SentEval (Conneau and Kiela, 2018).", "It consists of seventeen established tasks used to evaluate the quality of sentence embeddings.", "Additionally, ten probing tasks are provided to detect what linguistic properties are retained in sentence embeddings.", "In all tasks, models take either a single sentence embedding or a pair of sentence embeddings as the input and solve a classification (or regression) problem.", "(¹ https://klejbenchmark.com)", "The authors released a toolkit² for model evaluation.", "However, they do not provide a public leaderboard to compare the results of different models.", "Another benchmark for evaluating models is decaNLP (McCann et al., 2018), which consists of ten pre-existing tasks.", "In contrast to SentEval, the choice of tasks is much more diverse, ranging from machine translation and semantic parsing to summarization.", "All tasks have been automatically converted to a question answering format.", "Finally, the GLUE benchmark (Wang et al., 2019a) proposes a set of nine tasks.", "All of them are constructed from existing, well-established datasets.", "The authors selected tasks that are more diverse and more difficult than those in SentEval.", "Otherwise, the design of the benchmark is similar to SentEval.", "The aforementioned benchmarks are limited to the English language.", "Noteworthy attempts at providing multi-language benchmarks include the XNLI dataset (Conneau et al., 2018), with the MNLI (Williams et al., 2018) dataset translated by professional translators into 14 languages.
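
The unified scoring scheme mentioned above (every task is either text classification or regression) can be expressed as a tiny common interface. A minimal sketch; the class, field, and function names are illustrative and not the benchmark's actual API:

    from dataclasses import dataclass
    from typing import Callable, List, Sequence

    @dataclass
    class BenchmarkTask:
        name: str
        task_type: str  # "classification" or "regression"
        # metric(gold, predicted) -> float, e.g. accuracy, F1, Spearman
        metric: Callable[[Sequence, Sequence], float]

    def evaluate(task: BenchmarkTask, predict: Callable[[List[str]], object],
                 examples: List[List[str]], gold: Sequence) -> float:
        """Run one model over one task: each example is one or more input
        texts; the prediction is a class label or a real value."""
        assert task.task_type in ("classification", "regression")
        predictions = [predict(texts) for texts in examples]
        return task.metric(gold, predictions)

A single fine-tuning and evaluation loop can then serve all nine tasks, which is the point of the shared scheme.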
"A similar effort is XQuAD (Artetxe et al., 2019), which is a translation of the SQuAD dataset (Rajpurkar et al., 2016) into 10 languages.", "None of these efforts includes Polish.", "Other resources for evaluating Polish language understanding models are scarce.", "Recently, Krasnowska-Kieraś and Wróblewska (2019) prepared their version of the SentEval probing tasks for the Polish language.", "However, it is more suited to analyzing the linguistic properties of sentence embeddings than to assessing their quality.", "The PolEval platform (http://poleval.pl) (Wawer and Ogrodniczuk, 2017; Kobyliński and Ogrodniczuk, 2017; Ogrodniczuk and Kobyliński, 2018, 2019) organizes an annual competition in natural language processing for the Polish language.", "During the first three editions, it assembled 11 diverse tasks and attracted over 40 teams.", "It could serve as a natural benchmark for Polish language understanding, but it lacks a common interface across tasks, making it difficult and time-consuming to use.", "We include one of the PolEval tasks in the KLEJ benchmark.", "Recently, Dadas et al. (2019) introduced a benchmark similar to the KLEJ benchmark proposed in this paper.", "It contains two sentiment analysis tasks, topic classification and a Polish translation of the SICK dataset (Marelli et al., 2014).", "Similarly to their work, we use the same sentiment analysis dataset, but transform it into a more difficult task.", "We also use a dataset analogous to SICK, but created from scratch for the Polish language.", "Finally, we considered the topic classification task to be too easy to include in the proposed benchmark.", "Overall, the KLEJ benchmark consists of nine tasks.", "They are more diverse, cover a wider range of objectives and evaluate not only single sentences but also whole paragraphs.", "KLEJ consists of nine Polish language understanding tasks.", "Similarly to GLUE, we choose tasks from different domains and with different objectives.", "In contrast to previous benchmarks, we include several tasks that take multiple sentences as input.", "We decided to focus on tasks with relatively small datasets; most of them have fewer than 10k training examples.", "Moreover, some tasks require extensive external knowledge to solve them.", "Such a setup promotes knowledge transfer techniques like transfer learning, instead of training separate models for each task from scratch.", "In effect, KLEJ supports the goal of creating a general model for Polish language understanding.", "We present all tasks in the following sections and summarize them in Table 1.", "[Table 1: per-task statistics (name, train/dev/test sizes, domain, metric, objective); the recoverable rows are NKJP-NER: 16k/2k/2k, balanced corpus, accuracy, NER classification, and CDSC-R: 8k/1k/1k, image captions, Spearman correlation.]", "3.1 NKJP-NER: We use the human-annotated part of the NKJP (Narodowy Korpus Języka Polskiego, eng. National Corpus of Polish) (Przepiórkowski, 2012) to create the named entity classification task.",
"The original dataset consists of 85k sentences, randomly selected from a much larger, balanced and representative corpus of contemporary Polish.", "We use existing human annotations of named entities to convert the dataset into a named entity classification task.", "First, we filter out all sentences with entities of more than one type.", "Then, we randomly assign sentences to the training, development and test sets in such a way that each named entity appears in only one of the three splits.", "We decided to split the sentences based on named entities to make the task more difficult.", "To increase class balance, we undersample the persName class and merge the date and time classes.", "Finally, we sample sentences without any named entity to represent the noEntity class.", "The final dataset consists of 20k sentences and six classes.", "The task is to predict the presence and type of a named entity.", "Although the named entity classification task differs from the traditional NER task, it has comparable difficulty and evaluates similar aspects of language understanding.", "At the same time, it follows the same technical interface as the other KLEJ tasks, which makes it easy to use.", "We use accuracy for evaluation.", "The Compositional Distributional Semantics Corpus (Wróblewska and Krasnowska-Kieraś, 2017) consists of pairs of sentences which are human-annotated for semantic relatedness and entailment.", "Although the main design of the dataset is inspired by SICK, it differs in details.", "As in SICK, the sentences come from image captions, but the set of chosen images is much more diverse, as they come from 46 thematic groups.", "We prepared two KLEJ tasks based on the CDS Corpus.", "The first task is to predict the relatedness between a pair of sentences, ranging from 0 (not related) to 5 (very related).", "The score is the average of scores assigned by three human annotators.", "We use the Spearman correlation to measure the performance of the model.", "The second task uses the textual entailment annotations to predict if the premise entails the hypothesis (entailment), negates the hypothesis (contradiction), or is unrelated (neutral).", "Even though the label distribution is imbalanced (most labels are neutral), we follow Krasnowska-Kieraś and Wróblewska (2019) and use accuracy as the evaluation metric.", "The Cyberbullying Detection (CBD) task (Ptaszyński et al., 2019) was part of the 2019 edition of the PolEval competition.", "The goal is to predict whether a given Twitter message is a case of cyberbullying.", "We use the dataset as-is and use the F1-score to measure the performance of a given model, following the original design of the task.", "PolEmo2.0 (Kocoń et al., 2019) is a dataset of online consumer reviews from four domains: medicine, hotels, products and university.", "It is human-annotated at the level of full reviews, as well as individual sentences.", "It consists of over 8000 reviews, about 85% of which are from the medicine and hotel domains.", "Each review is annotated with one of four labels: positive, negative, neutral or ambiguous.", "The task is to predict the correct label.", "We use the PolEmo2.0 dataset to form two tasks.",
"Both of them use the same training dataset, i.e. reviews from the medicine and hotel domains, but are evaluated on different test sets.", "In the first task, we use accuracy to evaluate model performance within the in-domain context, i.e. on a test set of reviews from the medicine and hotel domains.", "In the second task, we test the model on out-of-domain reviews, i.e. reviews from the product and university domains.", "Since the original test sets for those domains are small (50 reviews each), we decided to use the original out-of-domain training set of 900 reviews for testing purposes and to create a new split of development and test sets.", "As a result, the task consists of 1000 reviews, which is comparable in size to the in-domain test dataset of 1400 reviews.", "The Czy wiesz? (eng. Did you know?) dataset (Marcińczuk et al., 2013) consists of almost 5k question-answer pairs obtained from the Czy wiesz... section of Polish Wikipedia.", "Each question is written by a Wikipedia collaborator and is answered with a link to a relevant Wikipedia article.", "The authors of the dataset used it to build a Question Answering system.", "Then, they evaluated the system using 1.5k questions.", "For each question, they took the top 10 system responses and manually annotated whether the answer was correct.", "Positive responses to 250 questions were further processed, and only the relevant continuous parts of the responses were selected by human annotators.", "Following this procedure, we manually extracted shorter responses from the remaining positive examples.", "Finally, we used these annotations to create positive question-answer pairs.", "To select the most difficult negative answers, we used the byte-pair encoding (BPE) (Sennrich et al., 2016) token overlap between a question and a possible answer.", "For each question, we took only the four most similar negatives and removed those with a similarity score below a threshold of 0.3.", "On average, the negative answers were much longer than the positive ones.", "Since this could potentially be exploited by the model, we decided to balance the lengths of the positive and negative answers.", "The task is to predict whether the answer to the given question is correct or not.", "Since the dataset is highly imbalanced, we chose the F1-score metric.", "The Polish Summaries Corpus (PSC) (Ogrodniczuk and Kopeć, 2014) is a dataset of summaries for 569 news articles.", "For each article, human annotators created five extractive summaries by choosing approximately 5% of the original text.", "Each summary was created by a different annotator.", "A subset of 154 articles was also supplemented with five additional abstractive summaries each, i.e. summaries not built from fragments of the original article.",
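"The hard-negative selection for Czy wiesz? above, and the PSC pair construction below, both rely on the same BPE token-overlap similarity. A minimal sketch of that selection step follows; the exact overlap formula and the helper names are illustrative assumptions, not the authors' released code:"

```python
# Sketch of hard-negative selection via subword-token overlap: candidates
# are ranked by overlap with the question, the top 4 are kept, and pairs
# below the 0.3 threshold are dropped (per the procedure described above).
def token_overlap(tokens_a, tokens_b):
    """Fraction of shared BPE tokens, relative to the shorter sequence
    (one plausible overlap definition; the exact formula is an assumption)."""
    a, b = set(tokens_a), set(tokens_b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def select_hard_negatives(question_tokens, candidates, k=4, threshold=0.3):
    """candidates: list of (answer_id, answer_tokens). Returns the ids of
    the at most k most similar candidates above the threshold."""
    scored = sorted(((token_overlap(question_tokens, toks), aid)
                     for aid, toks in candidates), reverse=True)
    return [aid for sim, aid in scored[:k] if sim >= threshold]
```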
"Based on PSC, we formulate a text-similarity task.", "We generate the positive pairs (i.e. pairs referring to the same article) using only those news articles which have both extractive and abstractive summaries.", "We match each extractive summary with the two least similar abstractive summaries of the same article.", "We use the same similarity metric as in the preparation of the Czy wiesz? dataset, calculating the BPE token overlap between the extractive and the abstractive summary.", "To create negative pairs, we follow a similar procedure.", "For each extractive summary, we find the two most similar abstractive summaries, but from different articles.", "We remove examples with similarity below a threshold of 0.15.", "To increase the difficulty and diversity of the task, we filter out multiple abstracts from the same article.", "As a result, there is at most one negative pair created from each pair of articles.", "In total, we obtain around 4k examples.", "We randomly split the dataset into train and test based on the articles of the extracts, to further increase the task's difficulty.", "For evaluation, we use the F1-score.", "We introduce a new sentiment analysis dataset, named Allegro Reviews (AR), extracting 12k product reviews from Allegro.pl, a popular e-commerce marketplace.", "Each review is at least 50 words long and has a rating on a scale from one (negative review) to five (positive review).", "The task is to predict the rating of a given review.", "To counter the slight class imbalance in the dataset, we propose to evaluate models using wMAE, i.e. the macro-average of the mean absolute error per class.", "Additionally, we transform the rating to be between zero and one and report 1 - wMAE to ensure consistent metric interpretation between tasks.",
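"A small sketch of the 1 - wMAE score defined above, assuming ratings are rescaled from the {1, ..., 5} scale to [0, 1]:"

```python
import numpy as np

def one_minus_wmae(y_true, y_pred, classes=(1, 2, 3, 4, 5)):
    """1 - wMAE: macro-average of the per-class mean absolute error,
    computed on ratings rescaled to [0, 1], so that higher is better."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    lo, hi = min(classes), max(classes)
    t = (y_true - lo) / (hi - lo)              # rescale targets to [0, 1]
    p = (y_pred - lo) / (hi - lo)              # rescale predictions too
    per_class = [np.abs(t[y_true == c] - p[y_true == c]).mean()
                 for c in classes if np.any(y_true == c)]
    return 1.0 - float(np.mean(per_class))     # macro-average over classes
```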
"In this section, we present an overview of several baseline models, which we evaluated using the KLEJ benchmark.", "We divide these models into three main groups: (1) LSTM-based (Hochreiter and Schmidhuber, 1997) models using pretrained word embeddings, (2) models based on the Transformer architecture, and (3) a BERT model trained on Polish corpora.", "We also include a simple baseline that samples targets from the training set.", "We chose the standard bidirectional LSTM text encoder as the base architecture.", "Following the GLUE experimental setup (Wang et al., 2019a), we trained it jointly as a multi-task learner on all KLEJ tasks.", "The architecture consists of two parts: a shared sentence encoder and a task-specific classifier.", "The sentence representation model is a two-layer BiLSTM with 1024 hidden units, 300-dimensional word embeddings and max pooling.", "The classifier is an MLP with a 512-dimensional hidden layer.", "We perform training in two stages.", "First, we pretrain the whole model in a multi-task scheme.", "In the second stage, we freeze the sentence encoder and fine-tune the classifiers separately for each task.", "The initial learning rate in both phases was set to $10^{-4}$, with linear decay down to $10^{-5}$.", "Pretraining progress is measured by the macro average of all task metrics.", "We train models with a batch size of 128, except for the ELMo version, which is trained with a batch size of 64.", "For tasks without a development set, we use 10% of the training examples as validation data.", "We used the jiant (Wang et al., 2019b) library to train the LSTM-based models and report the median performance of 5 runs.", "The simplest version of the LSTM-based models is a BiLSTM sentence encoder with an MLP classifier.", "Before contextual word embeddings became widely adopted, models were enhanced with pretrained word vectors.", "To evaluate their impact on KLEJ tasks, we initialize word embeddings with fastText (Bojanowski et al., 2016) vectors trained on Common Crawl and Wikipedia for the Polish language (Grave et al., 2018).", "ELMo (Peters et al., 2018) is a bidirectional language model using character-level convolutions.", "In contrast to fastText, ELMo's embeddings capture word-level semantics in the context of the whole sentence.", "We conducted more thorough experiments with ELMo embeddings.", "During the fine-tuning stage of training on a downstream KLEJ task, we modified the sentence encoder parameters and trained the entire architecture, leaving only the word embedding weights unmodified.", "Additionally, we experimented with the attention mechanism (Conneau et al., 2017) between all words in tasks with a pair of sentences.", "We use publicly available pretrained ELMo weights for the Polish language (Janz, 2019).", "Recently, the best results on the GLUE benchmark were obtained by Transformer-based models inspired by the Bidirectional Encoder Representations from Transformers (BERT) model.", "All of them are pretrained on large text corpora using some variant of the Masked Language Model (MLM) objective.", "In this section, we describe three such models: Multilingual BERT, XLM (Lample and Conneau, 2019) and Slavic-BERT (Arkhipov et al., 2019).", "At the time of writing this paper, these are the only available Transformer-based models that were trained with Polish text.", "To evaluate these models, we fine-tune them on each task separately.", "For training, we used the transformers (Wolf et al., 2019) library.", "All models were trained for 4 epochs with a batch size of 32, using a linearly decaying learning rate scheme starting at $2 \cdot 10^{-5}$ with a 100-iteration warm-up.", "We use the Adam optimizer with parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.", "We report the median performance of 5 runs.", "BERT is a popular model based on the Transformer architecture, trained using the MLM and Next Sentence Prediction (NSP) objectives.", "We use the Multilingual Cased BERT model, which was trained on 104 languages (including Polish), selected as those with the largest Wikipedia corpora.", "It uses a shared WordPiece (Wu et al., 2016) tokenizer with a vocabulary size of 110k.", "The Cross-lingual Language Model (XLM) is based on BERT.", "It differs from BERT in that it does not use the NSP objective, has more layers (16 vs 12), more attention heads (16 vs 12), a larger hidden layer size (1280 vs 768) and a larger vocabulary (200k vs 110k).", "Moreover, the vocabulary is learned on a corpus for which the most popular languages were undersampled to balance the number of tokens between high- and low-resource languages.", "We use the XLM-17 model, which was trained on Wikipedia for 17 languages (including Polish).",
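"As a rough sketch, the shared fine-tuning recipe described above (4 epochs, batch size 32, linear decay from $2 \cdot 10^{-5}$ with a 100-iteration warm-up, Adam) could be configured with the transformers library as follows; the checkpoint name and the number of batches per epoch are placeholders:"

```python
import torch
from transformers import (AutoModelForSequenceClassification,
                          get_linear_schedule_with_warmup)

# Placeholder checkpoint; any of the evaluated models could be used here.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=6)

epochs, batches_per_epoch = 4, 500            # 500 is a placeholder value
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5,
                             betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100,
    num_training_steps=epochs * batches_per_epoch)
# Per training step: loss.backward(); optimizer.step();
# scheduler.step(); optimizer.zero_grad()
```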
"The Slavic-BERT is a BERT model trained on four Slavic languages (Polish, Czech, Russian, and Bulgarian).", "Contrary to the previous models, Arkhipov et al. (2019) used not only Wikipedia but also the Russian News corpus.", "To avoid costly pretraining, the model was initialized with Multilingual BERT.", "None of the above models was optimized for Polish, and all of them were trained on Wikipedia only.", "We decided to combine several publicly available corpora and use them to train a Transformer-based model specifically for the Polish language.", "In this section, we describe the corpora on which we trained our model.", "Due to copyright constraints, we were not able to use the National Corpus of Polish (NKJP), the most commonly known Polish corpus.", "Instead, we combined several other publicly available corpora and created a larger, but less representative corpus.", "Wikipedia: The Polish version is among the top 10 largest Wikipedia versions.", "However, it is still relatively small compared to the English one (260M vs 3700M words).", "To extract a clean corpus from the raw Wikipedia dump, we used the tools provided by XLM (https://github.com/facebookresearch/XLM).", "However, we did not lowercase the text and did not remove diacritics.", "Wolne Lektury (eng. Free Readings) is an online repository of over 5k books, written by Polish authors or translated into Polish.", "Although the majority of the books in the dataset were written in the 19th or 20th century and might not be fully representative of contemporary Polish, they are free to download and can be used as a text corpus.", "Open Subtitles is a multilingual parallel corpus based on movie and TV subtitles (Lison and Tiedemann, 2016) from the opensubtitles.org website.", "As a result, it contains very specific, mostly conversational text consisting of short sentences.", "Since the translations are community-sourced, they may be of substandard quality.", "The Polish part of the dataset is relatively large compared to the other corpora (see Table 2).", "OSCAR is a multilingual corpus created by Ortiz Suárez et al. (2019) based on Common Crawl (http://commoncrawl.org/).", "The original dataset lacks information about the language used in particular documents.", "Categorization into specific languages was automated with a classifier, splitting the whole Common Crawl into many monolingual corpora.", "Duplicates were removed from the dataset to increase its quality.", "We only use the Polish part of the corpus, and only texts longer than 100 words.", "Allegro Articles: Additionally, we obtained over 30k articles from Allegro.pl, a popular e-commerce marketplace.", "They contain product reviews, shopping guides and other texts from the e-commerce domain.", "It is the smallest corpus we used, but it contains high-quality documents from the domain of our interest.", "Architecture: HerBERT is a multi-layer bidirectional Transformer.", "We use the BERT-Base architecture configuration with 12 layers, 12 attention heads and a hidden dimension of 768.", "Loss: We train HerBERT with an MLM objective.", "Following the updated version of BERT, we always mask all subword tokens corresponding to a randomly picked word.", "The whole word masking objective is more difficult to learn than predicting individual subword tokens (Joshi et al., 2019; Martin et al., 2019).", "In the original BERT training setup, tokens are masked statically during the text preprocessing phase.", "In HerBERT, we chose to use dynamic token masking, which follows the training setup of the RoBERTa model (Liu et al., 2019).", "We decided not to use the NSP objective.", "Previous studies by Yang et al. (2019) and Liu et al. (2019) showed that this objective is too easy and does not improve performance on downstream tasks.",
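"A minimal sketch of the dynamic whole-word masking described above: all subword tokens of a sampled word are masked together, and the mask is redrawn each time an example is served rather than fixed during preprocessing. The 15% masking rate and the word-boundary bookkeeping are assumptions of this sketch:"

```python
import random

MASK_ID = 4      # placeholder id of the [MASK] token in the vocabulary

def dynamic_whole_word_mask(token_ids, word_ids, mask_prob=0.15):
    """token_ids: BPE token ids; word_ids[i]: index of the word that
    token i belongs to. Returns (inputs, labels); because this runs per
    batch, masked positions differ between epochs (dynamic masking)."""
    picked = {w for w in set(word_ids) if random.random() < mask_prob}
    inputs, labels = [], []
    for tok, w in zip(token_ids, word_ids):
        if w in picked:            # mask every subword of the chosen word
            inputs.append(MASK_ID)
            labels.append(tok)     # the MLM loss predicts the original token
        else:
            inputs.append(tok)
            labels.append(-100)    # conventionally ignored by the loss
    return inputs, labels
```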
"Data preprocessing: We tokenize the corpus data into subword tokens using BPE.", "We learn the BPE splits on Wolne Lektury and a publicly available subset of the National Corpus of Polish.", "We choose these two datasets because of their higher quality compared to the rest of our corpus.", "We limit the vocabulary size to 50k tokens.", "Our datasets contain a lot of small fragments of coherent text that should be treated as separate documents.", "We remove degenerate documents consisting of fewer than 20 tokens from the available corpora.", "The maximal segment length is 512, as originally proposed in BERT.", "We do not accumulate short examples into full 512-token segments, because such sequences would be incoherent, with frequent topic changes.", "The only exception to this rule is the Open Subtitles dataset, where subsequent parts of dialogues were connected to form larger documents.", "The aforementioned training setup gives us slightly better performance on downstream tasks than simply selecting all available data.", "[Table: per-model results on KLEJ; the recoverable column headers are Model, AVG, NKJP-NER, CDSC-E, CDSC-R, CBD and PolEmo2.0.]", "Hyperparameters: We train HerBERT using the Adam optimizer (Kingma and Ba, 2014) with parameters $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$.", "We use a learning rate burn-in over the first 500 steps, reaching a peak value of $10^{-4}$; the learning rate is then linearly decayed for the rest of the training.", "We train the model with a batch size of 570.", "HerBERT was trained for 180k steps, without showing signs of saturation.", "We first compare models based on their average performance.", "Even though it is not a definitive metric for comparing models, especially as not all tasks are equally difficult, it gives a general notion of model performance across all tasks.", "In comparison to baselines based on the LSTM architecture, Transformer-based models clearly show superior performance.", "The only exception is ELMo, which achieves competitive results on many tasks.", "On two of them, the fine-tuned ELMo model achieves the best score.", "In general, the evaluation shows major shortcomings of multilingual pretrained BERT models for Polish, and possibly for other low-resource languages.", "Overall, every LSTM-based baseline is still on average worse than any of the tested Transformer-based models.", "The KLEJ benchmark was designed to require additional knowledge in order to promote general language understanding models.", "As expected, we observe significant increases in model quality when using pretrained word embeddings.", "The vanilla LSTM model achieves an average score of 63.0, while supplying it with fastText embeddings boosts performance to 67.7.", "Using more recent, contextualized embeddings (ELMo) increases the score to 76.6.", "Focusing on fewer languages seems to result in a better model.", "The Slavic-BERT has higher scores than Multi-BERT on seven out of nine tasks.", "However, without a detailed ablation study, it is difficult to infer the main reason for the better performance.", "It could also be attributed to a better tokenizer, the additional Russian News corpus or longer training (the Slavic-BERT was initialized with Multi-BERT weights).", "The training corpus seems to play an important role in downstream task performance.", "Both the HerBERT and ELMo models were trained mainly on web-crawled text, and they excel at tasks from the online domain (CBD, PolEmo-IN, PolEmo-OUT, and AR).",
"On the other hand, the other Transformer-based models are superior on the Czy wiesz? task.", "This may be related to the fact that it is a Wikipedia-based question-answering task, and the aforementioned models were trained mainly on Wikipedia corpora.", "Interestingly, the Slavic-BERT, which was additionally trained on the Russian News corpus, has a lower score on the Czy wiesz? task than Multi-BERT and XLM-17.", "HerBERT achieves highly competitive results compared to the other Transformer-based models.", "It has the best performance on average and achieves state-of-the-art results on three tasks: PolEmo-IN, PolEmo-OUT and CBD.", "Moreover, HerBERT has the smallest performance gap between PolEmo-IN and PolEmo-OUT, which suggests better generalization across domains.", "Compared to the other Transformer-based models, it performs poorly on the Czy wiesz? and PSC tasks.", "The KLEJ benchmark proved to be challenging and diverse.", "There is no clear winner among the evaluated models; different models perform better at different tasks.", "This suggests that the KLEJ benchmark is far from being solved, and it can be used to evaluate and compare future models.", "We introduce the KLEJ benchmark, a comprehensive set of evaluation tasks for Polish language understanding.", "Its goal is to drive the development of better NLU models, so a careful selection of tasks was crucial.", "We mainly focused on a variety of text genres, objectives, text lengths, and difficulties, which allows us to assess the models across different axes.", "As a result, the KLEJ benchmark proves to be both challenging and diverse, as no single model outperforms the others on all tasks.", "We find it equally important to provide a common evaluation interface for all the tasks.", "For that purpose, many existing resources had to be adapted, either automatically (NKJP-NER, PSC) or manually (Czy wiesz?), to make them easier to use.", "It is worth mentioning that the main weakness of creating such benchmarks is focusing only on model performance and not on model efficiency, e.g. in terms of training data, speed or the number of parameters.", "It seems reasonable to derive additional benchmarks by requiring a given level of efficiency from participating models.", "We leave this as future work.", "We also present HerBERT, a Transformer-based model trained specifically for Polish, and compare it with other LSTM- and Transformer-based models.", "We find that it is the best on average and achieves the highest scores on three tasks.", "We plan to continue the work on HerBERT and use the KLEJ benchmark to guide its development." ]
[ "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "objective", "objective", "objective", "objective", "objective", "objective", "abstain", "method", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "result", "method" ]
[ "Publicly available, large pretrained Language Models (LMs) generate text with remarkable quality, but only sequentially from left to right.", "As a result, they are not immediately applicable to generation tasks that break the unidirectional assumption, such as paraphrasing or text-infilling, necessitating task-specific supervision.", "In this paper, we present REFLECTIVEDECODING , a novel unsupervised algorithm that allows for direct application of unidirectional LMs to non-sequential tasks.", "Our 2-step approach requires no supervision or even parallel corpora, only two off-the-shelf pretrained LMs in opposite directions: forward and backward .", "First, in the contextualization step, we use LMs to generate ensembles of past and future contexts which collectively capture the input (e.g. the source sentence for paraphras-ing).", "Second, in the reflection step, we condition on these context ensembles, generating outputs that are compatible with them.", "Comprehensive empirical results 1 demonstrate that REFLECTIVEDECODING outperforms strong unsupervised baselines on both paraphrasing and abductive text infilling, significantly narrowing the gap between unsupervised and supervised methods.", "REFLECTIVEDECODING surpasses multiple supervised baselines on various metrics including human evaluation.", "Language Models (LMs) like GPT-2 (Radford et al., 2019), trained over vast unstructured data, can leverage enhanced generation methods (Holtzman et al., 2020; Martins et al., 2020; Welleck et al., 2019) to give fluent and coherent continuations to given input texte.g. news articles or stories.", "1 Further results and resource are available at https://homes.cs.washington.edu/pawest/ReflectiveDecoding.html !", "GPT-3 (Brown et al., 2020) takes this a step further: given a small number of examples and a well-constructed prompt, it shows remarkable performance on tasks where vast quantities of supervised data and finetuning were thought to be necessary.", "While this demonstrates the potential for LM-decoding in few-shot or even zero-shot out-of-the-box settings, limited access to GPT-3 and immense computational cost keep this from being a widely usable or efficient solution.", "Liang (2021) achieve supervised-level performance in a few-shot setting using smaller, accessible models like GPT-2.", "They learn a small number of task-specific vectors as a prefix to the input, without tuning the model itself.", "Off-the-shelf GPT-2 is capable of few-shot learning given the right setup; our work aims to push this concept further, by showing that out-of-the-box LMs can solve complex generation problems simply by using the right decoding algorithm.", "We introduce REFLECTIVEDECODING a novel decoding method that allows LMs to be applied to generation tasks that break the text con-tinuation paradigm, such as paraphrasing and text-infilling.", "REFLECTIVEDECODING requires no supervision, only two complementary off-the-shelf LMsone forward ( LM) and one backward ( LM).", "That means no per-task finetuning, even on unstructured text in the target domain.", "Inspired by the distributional hypothesis (Firth, 1957), REFLECTIVEDECODING works by generating text that might occupy the same contexts as an input.", "We use two LMs ( LM and LM) to generate contexts for a given input, which implicitly capture aspects of its meaning (the contextualization step).", "Then, in the reflection step, we condition on this ensemble of contexts, decoding over the input with generations that are distributionally related toor 
"Paraphrasing is a natural application: a good paraphrase should intuitively be compatible with the same contexts as the original text.", "REFLECTIVEDECODING shows strong unsupervised paraphrasing performance: on the Quora question pair dataset, we find one variant of our model (RD-30) outperforms unsupervised baselines on all but one metric, and supervised baselines on both the SARI metric and human evaluation.", "We see the same trends on the Twitter URL corpus (Lan et al., 2017).", "REFLECTIVEDECODING can also be applied to tasks that only replace part of the input, or generate within it, like infilling; on αNLG (Bhagavatula et al., 2020), we outperform the best unsupervised baseline on overall quality, effectively halving the gap with supervised methods.", "In both applications, REFLECTIVEDECODING directly applies off-the-shelf pretrained models, without finetuning on the task or target domain.", "This provides evidence that off-the-shelf Language Models can excel at surprising applications, when paired with decoding algorithms designed to elicit specific kinds of information.", "Arrows indicate the order in which sampling functions condition on and generate tokens: → indicates generating from the left-most token to the right (left-to-right), while ← indicates going right-to-left.", "For Language Models (LMs), this means →LM is what is often called a forward LM, while ←LM is a backward LM.", "For our sampling function (RD), this also indicates which generated context is being conditioned on, e.g. →RD conditions on left context, extending it to the right to generate output.", "Currently, LM-decoding is limited to a text continuation paradigm.", "Given an input text s_input, LM(c | s_input) generates contexts c that might come after (forward, i.e. →LM) or before (backward, i.e. ←LM) the input.", "LM-decoding generates outside of the input by continuing it, but many tasks require us to generate over or within the input: paraphrasing requires reformulating the input, while infilling requires inserting text in the middle of it.", "Reflective Decoding approaches this shortcoming by turning conventional LM-decoding around.", "While →LM(c | s_input) generates the kinds of contexts c the input might appear in, ←RD generates s that might replace s_input in these same contexts.", "The distributional hypothesis (Firth, 1957) suggests semantically similar texts appear in similar contexts, meaning ←RD is also likely to sample in the semantic neighborhood of s_input.", "Concretely, REFLECTIVEDECODING samples s that fits the same contexts as s_input in two simple steps.", "We first sample many representative contexts c_i that could neighbor the input, e.g. using →LM in Figure 1.", "This is the contextualization step.", "Second, in the reflection step, we generate text in the opposite direction (using ←LM in Figure 1), which fits these contexts as well as s_input fits them.", "To consider all c_i's while decoding, we ensemble the different distributions imposed by conditioning on each c_i: $\overleftarrow{RD}(s) = \frac{\prod_i \overleftarrow{LM}(s \mid c_i)^{w_i}}{Z(s, c, w)}$ (1), where Z normalizes the fraction to a proper probability distribution (see Equation 2).", "Algorithm 1 (Learn REFLECTIVEDECODER ←RD): Input: forward language model →LM, backward language model ←LM, source text s_input; 1: sample contexts $c_1, \ldots, c_{n_c} \sim \overrightarrow{LM}(c \mid s_{input})$; 2: initialize parameters $w = w_1, \ldots, w_{n_c}$ s.t. $\sum_i w_i = 1$, $w_i \geq 0$; 3: learn $w^* = \arg\max_w \overleftarrow{RD}(s_{input})$ s.t. $\sum_i w_i = 1$, $w_i \geq 0$; Output: ←RD ensemble.",
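"A sketch of the product-of-experts ensemble behind Equation 1 and Algorithm 1: each sampled context c_i induces a next-token distribution, and the ensemble combines them with the learned weights w_i. The tensor layout is an assumption of this sketch:"

```python
import torch
import torch.nn.functional as F

def ensemble_next_token_logprobs(per_context_logits, weights):
    """per_context_logits: [n_contexts, vocab_size] next-token logits,
    one row per conditioning context c_i; weights: [n_contexts], on the
    simplex. Returns normalized log-probs of the weighted product of
    experts, i.e. p(t) proportional to prod_i p_i(t)^{w_i}."""
    logp = F.log_softmax(per_context_logits, dim=-1)    # log LM(t | c_i)
    mixed = (weights.unsqueeze(-1) * logp).sum(dim=0)   # sum_i w_i log p_i
    return mixed - torch.logsumexp(mixed, dim=-1)       # renormalize over V
```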
"In essence, this construction restricts generations to fit all contexts c_i.", "The reversed →RD is the same, except it uses →LM with left contexts c_i generated by ←LM.", "By ensembling the contexts in a Product of Experts (Hinton, 2002) framework, we can generate a hypothesis s that fits the full contextual fingerprint.", "Yet, some contexts are more informative than others: probable but generic contexts like 'See the appendix for details.' are not descriptive of neighboring text.", "We learn weights w_i to prioritize contexts c_i in the ensemble that are most informative for s_input, by maximizing the probability of s_input under Equation 1 (as described in Algorithm 1).", "In effect, we are learning an on-the-fly autoencoder at inference time, using weighted ensembles of contexts as a representation (see 2.7, A.1).", "To motivate how this method functions, consider the paraphrasing example from Figure 1 with input s_input = 'How are circulatory system tissues formed?'", "Generated contexts reflect different aspects of s_input: c_1 situates s_input as a question ('This is a medical question...'), while c_2 and c_3 explore central concepts ('as with all tissue...'; 'about the circulatory system').", "Even though each context could follow many sentences, together they form a fingerprint for s_input.", "A sentence that could be followed by all of c_1, c_2, c_3 will likely be a question (c_1), about tissue formation (c_2) and the circulatory system (c_3), and generally occupy the same semantic neighborhood as s_input, e.g. 'How do circulatory systems form?'", "In the case of paraphrasing, our task is to replace all of s_input with something that might appear in the same contexts.", "Other tasks, however, might require us to replace only part of a sentence (e.g. in-context paraphrasing) or even insert text at a given position (e.g. infilling).", "REFLECTIVEDECODING makes this easy: simply hold part of s_input static when we generate from RD.",
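"Holding part of s_input static, as described above, amounts to teacher forcing the fixed tokens while sampling only the free span. A minimal sketch, where the sampling helper is an assumed stand-in for drawing one token from the RD ensemble:"

```python
def generate_with_static_span(static_tokens, sample_next, max_new=30):
    """static_tokens: the part of s_input to keep (e.g. a suffix, when
    decoding right-to-left); sample_next(tokens_so_far) -> next token id
    drawn from the RD ensemble, or None at end-of-text (an assumed
    helper). Fixed tokens are fed back verbatim; only the remaining
    span is sampled."""
    out = list(static_tokens)       # held static, never resampled
    for _ in range(max_new):
        tok = sample_next(out)      # condition on fixed + generated tokens
        if tok is None:
            break
        out.append(tok)
    return out
```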
"2.3 REFLECTIVEDECODING: Here we dive into the details of REFLECTIVEDECODING, considering the right-hand context ensemble (←RD) and keeping in mind that the process is repeated on the left-hand side as well (→RD).", "First, in the contextualization step (line 1 of Algorithm 1), we sample many right-hand contexts c_i for s_input, using →LM.", "These will be used as a representative sample of the contexts s_input appears in.", "Second, in the reflection step (lines 2 and 3), our goal is to construct a sampling function ←RD that will yield texts similar to s_input.", "We define ←RD as: $\overleftarrow{RD}(s) = \frac{\prod_i \overleftarrow{LM}(s \mid c_i)^{w_i}}{\prod_{j=0}^{|s|} \sum_{t \in V} \prod_i \overleftarrow{LM}(t \mid s_{j+1:|s|} + c_i)^{w_i}}$ (2).", "This is equivalent to Equation 1, but gives the exact normalization factor in the denominator.", "Equation 2 is a token-wise Product of Experts model that captures the semantic neighborhood of s_input via the combination of contexts c_i and their weights w_i (2.7).", "We learn w_i that maximize ←RD(s_input) (the probability of generating s_input under ←RD), thereby up-weighting contexts specific to s_input.", "We initialize these weights (line 2), then train them (line 3) using the Adam optimizer (Kingma and Ba, 2014).", "We normalize the weights into a proper probability distribution at every step.", "The reverse-direction →RD is learned symmetrically, flipping the roles of →LM and ←LM and sampling left-hand contexts instead (see B.1 for details).", "Finally, we generate from ←RD (and →RD), sampling outputs that would appear in the same contexts as s_input.", "Depending on the application, we rank and select a final output in different ways, always using →LM and ←LM together to capture bidirectional fit.", "Weight Learning and Pruning: Context weights w_i are learned using the Adam optimizer (Kingma and Ba, 2014).", "In practice this takes under 100 steps (negligible time compared to LM decoding).", "While we sample tens of contexts (line 1 of Algorithm 1), many end up with negligible weight under the learned distribution (Equation 2).", "To efficiently sample from ←RD and →RD, we drop all but the top k_c contexts and renormalize the weights: k_c < n_c contexts are used during the reflection step.", "Parameters: We sample n_c contexts to describe the source s_input.", "We use nucleus sampling (Holtzman et al., 2020) with parameter p_c and a maximum length of len_c.", "Once ←RD and →RD are learned, we sample n_s generations from each, of length len_s.", "We again use nucleus sampling, but choose p_s per example to account for vastly different entropy in RD (B.3).", "Values for all hyperparameters are available in B.4.", "Language Models: We train large forward (→LM) and backward (←LM) Language Models based on GPT-2 (Radford et al., 2019) using the OpenWebText training corpus (Gokaslan and Cohen, 2019; https://github.com/yet-another-account/openwebtext).", "Our implementation details follow those of past work retraining GPT-2 (Zellers et al., 2019).",
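"A simplified sketch of the weight learning and pruning described above: context weights are optimized with Adam to maximize the (log-)probability of s_input and then pruned to the top k_c. The softmax parameterization keeps the weights on the simplex, and the per-token normalizer of Equation 2 is omitted here for brevity, so this approximates line 3 of Algorithm 1 rather than reimplementing it faithfully:"

```python
import torch

def learn_context_weights(logp_input_given_ctx, k_c=8, steps=100, lr=0.1):
    """logp_input_given_ctx: [n_contexts] tensor of log LM(s_input | c_i)
    for each sampled context. Returns (kept_indices, pruned_weights)."""
    theta = torch.zeros_like(logp_input_given_ctx, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        w = torch.softmax(theta, dim=0)                # simplex constraint
        loss = -(w * logp_input_given_ctx).sum()       # maximize weighted logp
        opt.zero_grad(); loss.backward(); opt.step()
    w = torch.softmax(theta, dim=0).detach()
    k = min(k_c, w.numel())
    top = torch.topk(w, k).indices                     # keep top k_c contexts
    return top, w[top] / w[top].sum()                  # renormalize survivors
```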
"To paraphrase, we begin by generating candidate outputs.", "Following 2.3, the REFLECTIVEDECODING sampling function is learned in each direction (→RD, ←RD) using the source sentence s_input.", "Then, n_s generations are sampled from each: $s_1, \ldots, s_{n_s} \sim \overrightarrow{RD}$ and $s_{n_s+1}, \ldots, s_{2n_s} \sim \overleftarrow{RD}$.", "This gives a robust set of candidates that are compatible with the same left and right contexts as s_input.", "Many of these will be semantically related to s_input, but they must be scored and ranked in order to select true paraphrases.", "REFLECTIVEDECODING is based on the notion that good fit with the same contexts is a robust measurement of similarity, yielding a natural contextual scoring function (Equation 7 and 2.7).", "We measure how likely a candidate s is to generate the same contexts that s_input did when constructing ←RD and →RD: $score(s) = \frac{1}{n_c} \sum_{c_{rh}} \overrightarrow{LM}(c_{rh} \mid s) + \frac{1}{n_c} \sum_{c_{lh}} \overleftarrow{LM}(c_{lh} \mid s)$ (3), where c_rh are the generated contexts used in ←RD, and c_lh those used in →RD.", "This explicitly estimates how similar the contexts of s and s_input are on both sides, the underlying objective of REFLECTIVEDECODING.", "Abductive natural language generation (αNLG; Bhagavatula et al. 2020) is the task of filling in the blank between two observations o_1 and o_2 with a hypothesis h that abductively explains them.", "The challenge for LM-decoding is making use of context from both sides (o_1 on the left and o_2 on the right).", "This is particularly challenging for unsupervised decoding methods, because unidirectional LMs cannot naturally condition on both sides when generating h.", "REFLECTIVEDECODING simplifies this problem by capturing information about both o_1 and o_2 in a single decoding function (←RD or →RD), then holding o_1 and o_2 static at generation time (i.e. teacher forcing).", "Concretely, we use the concatenated o_1 + o_2 as s_input in Algorithm 1, and construct sampling functions →RD and ←RD informed by both observations.", "We are interested in sampling in between o_1 and o_2, so when sampling hypotheses h from ←RD we condition on the right-side observation o_2 (and vice versa for →RD and o_1).", "This is equivalent to appending the given observation to the sampled contexts: $h_1, \ldots, h_{n_s} \sim \overleftarrow{RD}(h \mid o_2)$ and $h_{n_s+1}, \ldots, h_{2n_s} \sim \overrightarrow{RD}(h \mid o_1)$ (4).", "Note that both ←RD and →RD contain information about both o_1 and o_2, effectively turning a two-sided contextual constraint into a one-sided one.", "We also use a task-specific scoring function to rank the sampled hypotheses.", "We would like a hypothesis h that best explains both observations, and so use the Language Models to measure this: $score(h) = \overleftarrow{LM}(o_1 \mid h + o_2) + \overrightarrow{LM}(o_2 \mid o_1 + h)$ (5).", "Adding h should help to explain each observation given the other, i.e. o_2 should follow from o_1 + h, and o_1 from h + o_2.", "To filter hypotheses that only explain one of the two observations, we remove any that make either observation less probable than the empty hypothesis, imposing $\overleftarrow{LM}(o_1 \mid h + o_2) > \overleftarrow{LM}(o_1 \mid o_2)$ and $\overrightarrow{LM}(o_2 \mid o_1 + h) > \overrightarrow{LM}(o_2 \mid o_1)$.",
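"A sketch of the hypothesis scoring and filtering above (Equation 5 plus the two constraints), assuming helper functions that return LM log-probabilities of a target span given its context:"

```python
def rank_hypotheses(hyps, o1, o2, fwd_logp, bwd_logp):
    """fwd_logp(target, left_ctx): log prob of target under the forward
    LM given the left context; bwd_logp(target, right_ctx): likewise for
    the backward LM. Both helpers are assumptions of this sketch.
    Keeps hypotheses that raise the probability of each observation over
    the empty hypothesis, ranked by the Equation 5 score."""
    base_o1 = bwd_logp(o1, o2)            # log LM(o1 | o2), empty h
    base_o2 = fwd_logp(o2, o1)            # log LM(o2 | o1), empty h
    kept = []
    for h in hyps:
        s1 = bwd_logp(o1, h + " " + o2)   # log LM(o1 | h + o2)
        s2 = fwd_logp(o2, o1 + " " + h)   # log LM(o2 | o1 + h)
        if s1 > base_o1 and s2 > base_o2:  # filter one-sided hypotheses
            kept.append((s1 + s2, h))      # Equation 5 score
    return [h for _, h in sorted(kept, key=lambda x: -x[0])]
```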
"2.7 Intuitions and Theory: Here we discuss the theoretical intuition behind REFLECTIVEDECODING as a way to sample generations that share contextual fit with a source text, deriving the sampling function of Equation 2.", "We start by considering how to relate the meaning of two texts, a generation s and an input s_input.", "We follow a distributional intuition (Firth, 1957): that meaning can be understood through the contexts in which text appears.", "Many distributional approaches learn contentful neural representations by predicting context given input text (Mikolov et al., 2013; Kiros et al., 2015), then compare these representations to establish semantic similarity.", "We can, instead, compare contexts directly, judging the difference in meaning between texts s_input and s by their divergence: $D_{KL}(\overrightarrow{LM}(c \mid s_{input}) \,\|\, \overrightarrow{LM}(c \mid s))$ (6).", "We use →LM to interchangeably denote the theoretical left-to-right distribution of text and the LM estimating it.", "Thus, →LM(c | s) is the distribution over right contexts c given sentence s, and Equation 6 can be understood as the contextual information difference we expect s to have from s_input.", "Note that we could similarly use left-hand contexts and ←LM, and we do so in practice.", "We use the finite-sample cross-entropy as an effective empirical proxy for $D_{KL}$: $H(\overrightarrow{LM}(c \mid s_{input}), \overrightarrow{LM}(c \mid s)) = -\frac{1}{N} \sum_{c_i \sim \overrightarrow{LM}(c \mid s_{input})} \log \overrightarrow{LM}(c_i \mid s)$ (7), where $c_i \sim \overrightarrow{LM}(c \mid s_{input})$ indicates sampling contexts for s_input from →LM.", "Intuitively, we want to minimize this score when generating s: an optimal output has a similar meaning to s_input and so fills approximately the same contextual hole, minimizing the value of this contextual distance.", "In this form, H compares two complete texts s and s_input, but we are trying to generate s for which the divergence from s_input is low.", "We flip the roles of text and context (context is a symmetric relation: a given text serves as the one-sided context of its own context) to define a function from which we can sample s: $\overleftarrow{RD}(s_j \mid s_{j+1:n}) = \frac{\prod_i \overleftarrow{LM}(s_j \mid s_{j+1:n} + c_i)^{w_i}}{\sum_{t \in V} \prod_i \overleftarrow{LM}(t \mid s_{j+1:n} + c_i)^{w_i}}$ (8) (equivalent to Equation 2, derived in A.1), where s_j is the j-th token in s (sampled right-to-left from n down to 0) and V is the vocabulary.", "Weights w_i are learned by maximizing the probability of s_input.", "Equation 8 estimates the probability of predicting s_input and s from a finite set of contexts c_i generated from s_input.", "This approximately minimizes Equation 6, as being generated by the same weighted ensemble of contexts strongly correlates with generating the same contexts in the same proportions, i.e. low divergence, due to the sparsity of language.",
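"A sketch of the finite-sample cross-entropy of Equation 7, assuming a helper that returns the forward LM's log-probability of a context given a preceding text:"

```python
def contextual_cross_entropy(s, sampled_contexts, fwd_logp):
    """sampled_contexts: contexts c_i previously drawn from LM(c | s_input);
    fwd_logp(context, prefix): log LM(context | prefix), an assumed helper.
    Lower values mean s fills roughly the same contextual hole as s_input."""
    n = len(sampled_contexts)
    return -sum(fwd_logp(c, s) for c in sampled_contexts) / n
```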
"We can sample s with low contextual distance from s_input using ←RD.", "Further, we can use left contexts to construct →RD by simply reversing the directions of the LMs used.", "Task: Following past work, we test our paraphrasing method (2.5) on the Quora question pair dataset.", "We hold out 1000 examples for testing, with the rest for training and validation (used by supervised baselines), disallowing overlap with the test set.", "We test a subset of models (compatible unsupervised models, MT) on the Twitter URL corpus (Lan et al., 2017), using 1000 examples from the canonical test split.", "Metrics: Following past work, we include the automatic metrics BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2014), and TERp (Snover et al., 2009).", "These measure agreement with references, but high reference/input overlap means copying is rewarded (Mao and Lee, 2019); indeed, copying source sentences as-is wins on these metrics (Table 1), meaning both BLEU and METEOR can be easily gamed.", "Past work has emphasized the important challenge of generating novel paraphrases (Liu et al., 2010; Chen and Dolan, 2011).", "We address this in three ways.", "First, we explicitly quantify a simple notion of novelty, $Novelty(s) = 100 - BLEU(s, s_{input})$ (9), to quantify the novelty-quality trade-off.", "Second, we include the SARI metric (Xu et al., 2016), which explicitly balances novelty from the input with reference overlap.", "Third, we quantify an overall human quality metric accounting for this.", "We have humans evaluate fluency, consistency, and novelty on Amazon Mechanical Turk.", "The overall score (Human in Table 1) is the rate at which examples meet thresholds for all three: fluent enough to understand, with at most minor differences in meaning and at least minor differences in wording.", "On Quora, we test 200 examples, with agreement (Fleiss' kappa; Fleiss, 1971) of 0.40 (fluency), 0.54 (consistency), 0.77 (novelty) and 0.48 (overall), i.e. moderate to substantial agreement (Landis and Koch, 1977).", "On the Twitter corpus, we use 100 examples, with agreement of 0.39, 0.42, 0.54, and 0.36, indicating fair to moderate agreement.", "On both, we have three raters per example.", "See C.2 for more.",
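"The Novelty score of Equation 9 is straightforward to compute with an off-the-shelf BLEU implementation; sacrebleu is one assumed choice:"

```python
from sacrebleu import sentence_bleu

def novelty(candidate, source):
    """Novelty(s) = 100 - BLEU(s, s_input): 0 for a verbatim copy,
    approaching 100 as the candidate diverges from the source wording."""
    return 100.0 - sentence_bleu(candidate, [source]).score
```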
"Baselines: Parameters for REFLECTIVEDECODING are given in B.4.", "We mainly compare against three unsupervised baselines: Controlled Sentence Generation by Metropolis-Hastings (CGMH; Miao et al. 2019), Simulated Annealing (UPSA; Liu et al. 2019) and the residual VQ-VAE of Roy and Grangier (2019a) (R-VQVAE).", "This is a cross-section of recent approaches (VAE, editing).", "We also compare against a machine-translation approach (see Sec 6), pivoting through German using Transformer (Vaswani et al., 2017) models trained on WMT19 data (Barrault et al., 2019).", "MT is included in a separate section in our results, as it uses supervised bilingual data (Table 1).", "We include supervised baselines: the pointer generator trained by imitation learning (PG-IL) as in Du and Ji (2019), the diversity-promoting DiPS model (Kumar et al., 2019), and a finetuned BART model (Lewis et al., 2019), which uses a more complex pretraining method than our LMs.", "Note that DiPS generates multiple diverse paraphrases, so we pick one at random.", "CGMH and REFLECTIVEDECODING both return multiple sampled, ranked paraphrases.", "We can easily control for Novelty by taking the highest-ranked output that meets a Novelty threshold.", "For both, we have a version with no threshold (Top), and versions with thresholds such that the average Novelty is 30 and 45.", "Novelty cutoffs do not depend on the reference, only the source, and are equivalent to selecting with BLEU-ori (Novelty is 100 - BLEU-ori) as in Miao et al. (2019) or Bao et al. (2019).", "Task: The abductive natural language generation task (αNLG) presented in Bhagavatula et al. (2020) requires generating a hypothesis that fits between two given observations.", "We apply REFLECTIVEDECODING to this problem as outlined in 2.6, using the given data splits.", "Metrics: For human evaluation, over 200 examples, we ask three raters on Amazon Mechanical Turk about the coherence between h and o_1, o_2, and o_1 + o_2, and about overall quality, on 4-point Likert scales.", "We found Fleiss' kappa (Fleiss, 1971) values of 0.32, 0.40, 0.41, and 0.41 respectively, indicating fair to moderate agreement (Landis and Koch, 1977).", "Baselines: Parameters for REFLECTIVEDECODING are given in B.4.", "We include baselines from the original work: different supervised variants of GPT-2 large with access to the observations, optionally with commonsense embeddings or generations from COMET (Bosselut et al., 2019).", "We include the unsupervised baselines of GPT-2 conditioned on o_1 + o_2 directly, the gradient-based DeLorean model of Qin et al. (2020), and the ILM infilling model of Donahue et al. (2020), representing recent unsupervised methods.",
"Paraphrasing: First, the Quora dataset: on the automatic metrics from past work (BLEU, METEOR, TERp), our lowest-Novelty model setting (RD-Top) achieves the highest unsupervised scores, and the highest score overall on BLEU.", "Other high-scoring rows (Source, PG-IL) are similarly low-Novelty.", "The SARI metric explicitly balances Novelty with similarity to the reference.", "On SARI, we see such low-Novelty models perform worse.", "The best overall model on SARI is our medium-Novelty setting (RD-30), which outperforms MT and the supervised models.", "Our human evaluation measures what fraction of outputs are found to be fluent, consistent, and novel.", "As with SARI, both our medium- and high-Novelty models perform quite well, again with the medium-Novelty setting outperforming all baselines.", "As further validation of SARI as a proxy for human judgment, they share the same top-5 models.", "Results on the Twitter URL corpus largely support those on Quora.", "REFLECTIVEDECODING achieves the best unsupervised scores on novelty-aware metrics (Table 2), with the best overall SARI, even outperforming the reference on the human metric, although MT achieves the highest overall.", "In sum, REFLECTIVEDECODING is able to compete on previously used quality metrics that favor low Novelty, but can produce more varied outputs preferred by humans.", "RD-45 is among the best models by SARI and Human on Quora, despite exceeding the novelty of even the reference.", "αNLG: Results on αNLG (Table 3) present a strong case that REFLECTIVEDECODING can effectively use bidirectional context.", "Strong hypotheses use information from both the initial observation o_1 and the future observation o_2.", "Humans rated the ability of REFLECTIVEDECODING to capture this at 42.4, about 17 points above the next-best unsupervised baseline and only 15 points below the best supervised method tested.", "We see similar results for the overall evaluation.", "A likely factor in this is the comparatively high degree of coherence between h and o_2 achieved by REFLECTIVEDECODING.", "Where other methods seem to pay more attention to observation o_1 (the o_2 column generally has much lower values), REFLECTIVEDECODING has comparably high coherence with the left-hand (o_1) and right-hand (o_2) contexts.", "We also include example generations in Figure 2 to demonstrate the ability of REFLECTIVEDECODING to combine o_1 and o_2.", "For example, h = 'He put her on the swing, and while she was on the swing, she fell off and was lying on the ground.' incorporates information from both observations.", "Specifically, it takes into account the swing that Ray is building for his daughter, which is only mentioned in o_1, and hypothesizes about a potential injury due to Ray checking on his daughter in o_2.", "See the appendix for more generations.", "Overall, the strong performance of REFLECTIVEDECODING on αNLG shows that unsupervised generation with context ensembles applies to infilling in addition to paraphrasing.", "REFLECTIVEDECODING Out-of-the-Box: A major advantage of applying REFLECTIVEDECODING is ease of use: armed with our pretrained language models, practitioners can immediately begin generating.", "With general pretrained models and underlying principles that are domain-agnostic, REFLECTIVEDECODING works across a broad range of text styles, no finetuning required, making exploration and adaptation simple.", "Multiple rounds of generation mean REFLECTIVEDECODING may run slower than other methods at inference time (depending on parameters, we found most baselines took multiple seconds per example vs. tens of seconds for REFLECTIVEDECODING on a multi-GPU machine), but it avoids training time.",
"There are clearly settings that favor supervised learning (a narrow, known domain with abundant training data), but REFLECTIVEDECODING is a good option to begin generating and exploring immediately, with high-quality generation.", "A useful abstraction for understanding REFLECTIVEDECODING for current applications is prompting, i.e., writing a prefix to implicitly or explicitly describe a task for a pretrained model.", "REFLECTIVEDECODING generates natural contexts that the desired generation would appear in.", "This breaks from other methods of automatic prompting, which often forego natural prompts (Shin et al., 2020; Reynolds and McDonell, 2021), even making them continuous (Li and Liang, 2021; Hambardzumyan et al., 2021; Lester et al., 2021; Qin and Eisner, 2021).", "REFLECTIVEDECODING also notably creates a set of prompts (contexts) for each example, where other methods attempt to learn an overall task prompt.", "Still, all of these are connected by the popular intuition that useful behavior in pretrained models can be induced through contextual input.", "Future Applications: REFLECTIVEDECODING can extend beyond our experiments here, however.", "A simple example is in-context paraphrasing, i.e. writing a paraphrase that fits the true context that the original sentence appears in.", "Most existing paraphrasing methods consider only out-of-context sentences, and would require significant changes to consider context as a constraint; for REFLECTIVEDECODING, we can simply combine true and generated contexts and use the same algorithm.", "Using generated contexts as a representation of text spans also offers potential for future work.", "Pretrained LMs capture rich information about text spans, but accessing it without fine-tuning is nontrivial; within the model, it is an uninterpretable mass of parameters and activation weights.", "Our work observes that unidirectional LMs capture this information only to predict adjacent context (this is the sole learning signal), so all of this information is expressed in the model's context prediction.", "Thus, we capture some of this rich information to represent spans, by capturing a finite-sample version of this full predictive distribution in generated contexts.", "In REFLECTIVEDECODING specifically, we use this form of representation to generate back into the source span (paraphrasing or infilling), but the notion can be applied much more generally.", "In translation, for instance, we might first generate contexts for the source sentence that represent its meaning, noisily translate these contexts, then impose that any translations of the source fit the same contexts under a translation-language LM.", "Constraining translations in this way can add robustness to existing systems by anchoring translations to informative contexts.", "Beyond explicit generation even, we might use a very large LM like GPT-3 to define a strong scoring function or metric as in Equation 7, first generating contexts for some target sentence, then scoring candidates by how well they generate these same contexts.", "As in our work, such a score could indicate how well the option fills the same contextual role as the target, harnessing the strong reasoning of whatever model is used.", "Distributional Intuitions: A key aspect of REFLECTIVEDECODING is using a distributional intuition to represent the meaning of a text through many contexts.",
(2019) quantify semantic relationships and Lin and Pantel (2001) identify paraphrastic relationships under similar intuitions.", "A major point of difference between past work and ours is that we sample explicit contexts, allowing unsupervised generation back from these contexts, while past work typically learns a neural representation based on contexts and conditions on this vector-encoded representation.", "Unsupervised Paraphrasing Some approaches train neural variational auto-encoders unsupervised to represent source sentences, then decode from these representations to paraphrase (Roy and Grangier, 2019b; Bao et al., 2019).", "Table 3: Model performance on αNLG (scores for o1 / o2 / o1+o2 / all). Human: 86.3 / 89.1 / 85.1 / 84.4. Supervised: COMeT-Emb+GPT2 69.3 / 60.1 / 56.4 / 56.3; COMeT-Txt+GPT2 68.9 / 54.8 / 51.9 / 50.6; O1-O2-Only 69.2 / 57.7 / 54.3 / 53.8. Unsupervised: GPT2-Fixed 20.6 / 13.9 / 10.8 / 10.3; DeLorean 48.7 / 24.6 / 23.6 / 22.5; ILM 45.9 / 27.3 / 25.3 / 25.0; Reflective Decoding 53.4 / 51.7 / 42.4 / 41.9.", "This requires training specialized representations, whereas REFLECTIVEDECODING applies general-purpose LMs.", "We compare to Roy and Grangier (2019b).", "Paraphrasing by editing the input (Miao et al., 2019; Liu et al., 2019) has shown promise.", "Like REFLECTIVEDECODING, these approaches can be applied without training specialized models, but they are necessarily limited by edit paths and local minima, as edits are often restricted to single-word replacement, insertion, and deletion.", "Generated paraphrases must follow a continuous local edit path, while REFLECTIVEDECODING can generate new sentences from scratch.", "REFLECTIVEDECODING and MT-based paraphrasing both pivot through an alternative textual form to paraphrase (context and translation, respectively).", "But MT paraphrasing systems cycle-translate through a pivot language (Federmann et al., 2019; Wieting and Gimpel, 2018), which requires supervised bilingual translation data, with an implicit notion of interlingual paraphrasing.", "Novelty in Paraphrasing Mao and Lee (2019) observe that paraphrases close to the source often win on automatic quality metrics.", "However, dissimilarity from the source correlates with human notions of paraphrasing (Liu et al., 2010).", "Kumar et al. 
(2019) increase novelty through their diversity-promoting sampling method.", "Alternative metrics that consider novelty alongside quality have been proposed (Sun and Zhou, 2012; Federmann et al., 2019).", "The SARI metric (Xu et al., 2016), included here, combines these notions.", "Abductive Text Infilling αNLG (Bhagavatula et al., 2020) is a text infilling task that specifically measures the ability of models to explain bidirectional context (observations o1, o2) with a hypothesis that fits between them.", "This naturally fits REFLECTIVEDECODING, which fills in contextual gaps.", "Recent work has directly addressed this task (Qin et al., 2020), while the infilling literature is also quite applicable (Donahue et al., 2020).", "We compare to both of these methods on abductive infilling, showing superior results.", "We present REFLECTIVEDECODING, a novel unsupervised text generation method for tasks that do not fit the text continuation paradigm.", "It uses just two pretrained language models to generate contexts that capture aspects of the input text, generating back into the input from there.", "It significantly outperforms unsupervised baselines in quality and novelty for paraphrasing.", "Further, in abductive natural language generation it outperforms unsupervised baselines by a significant margin and halves the gap with supervised models.", "REFLECTIVEDECODING uses the concept of representing meaning with generated contexts, offering new possibilities for unsupervised conditional text generation.", "We thank the anonymous reviewers for many helpful comments.", "This research is supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) (funding reference number 401233309), DARPA CwC through ARO (W911NF15-1-0543), the DARPA MCS program through NIWC Pacific (N66001-19-2-4031), the Allen Institute for AI, and a gift from Intel Labs Cognitive Computing Research.", "In order to complete our human evaluation we used Amazon Mechanical Turk.", "We estimated the range of times we expected our task to take, and made sure that at minimum workers would be paid a wage of $15.00 per hour if they were solely completing our task.", "As part of this effort, we plan to release our code and model.", "Our forward and backward language models are the same size as the publicly available GPT-2 (Radford et al., 2019).", "Training time/energy was likely significantly smaller than for the original release; existing code and hyperparameters were available, and we use a smaller dataset.", "Further, there is no publicly available backward GPT-2 model that we are aware of, so releasing a pair of forward and backward models that were trained on the same data allows for proper comparisons about left-to-right vs. right-to-left processing of English text.", "We estimate that the potential dangers of releasing this from a malicious generation perspective are low.", "Our forward model is similar to already released GPT-2 models.", "While the backward model adds new generation potential and scientific novelty, it is unlikely to compare to GPT-3 (Brown et al., 2020), to which many hobbyists and private companies now have access.", "We believe that releasing a pair of forward and backward models will be more useful to researchers who wish to study the symmetries and asymmetries of the linguistic distribution." ]
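The record above proposes an LM-defined scoring function (Equation 7): sample contexts for a target sentence, then score candidates by how well they regenerate those contexts. Below is a minimal, one-sided approximation using only a forward GPT-2 via HuggingFace Transformers; the paper's full method also uses a backward LM, and the helper names here are hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def sample_right_contexts(span, n=5, max_new_tokens=20):
    # Sample n plausible right-hand continuations ("contexts") for the span.
    ids = tok(span, return_tensors="pt").input_ids
    out = lm.generate(
        ids, do_sample=True, top_p=0.9, max_new_tokens=max_new_tokens,
        num_return_sequences=n, pad_token_id=tok.eos_token_id,
    )
    return [tok.decode(o[ids.shape[1]:], skip_special_tokens=True) for o in out]

def context_fit(candidate, contexts):
    # Mean per-token log-likelihood of the sampled contexts given the
    # candidate; a rough one-sided analogue of the ensemble score in Eq. 7.
    total_lp, total_toks = 0.0, 0
    for ctx in contexts:
        if not ctx.strip():
            continue
        prefix_len = tok(candidate, return_tensors="pt").input_ids.shape[1]
        full = tok(candidate + ctx, return_tensors="pt").input_ids
        labels = full.clone()
        labels[:, :prefix_len] = -100  # score only the context tokens
        with torch.no_grad():
            loss = lm(full, labels=labels).loss  # mean NLL over unmasked labels
        n_ctx = int((labels != -100).sum())
        total_lp += -float(loss) * n_ctx
        total_toks += n_ctx
    return total_lp / max(total_toks, 1)

contexts = sample_right_contexts("The festival was postponed until next spring.")
for cand in ["The event was delayed to the spring.",
             "The festival sold out within minutes."]:
    print(round(context_fit(cand, contexts), 3), cand)
```

A candidate that fits the same right-hand contexts as the target scores higher, mirroring the intuition that a span's meaning is reflected in the contexts it licenses.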
[ "abstain", "abstain", "objective", "abstain", "objective", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "result", "method", "method", "abstain", "abstain", "abstain", "method", "other", "other", "method", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method" ]
[ "Retrieval is a core component for open-domain NLP tasks.", "In open-domain tasks, multiple entities can share a name, making disambiguation an inherent yet under-explored problem.", "We propose an evaluation benchmark for assessing the entity disambiguation capabilities of these retrievers, which we call Amb iguous E ntity R etrieval (AmbER) sets .", "We define an AmbER set as a collection of entities that share a name along with queries about those entities.", "By covering the set of entities for polysemous names, AmbER sets act as a challenging test of entity disambiguation.", "We create AmbER sets for three popular open-domain tasks: fact checking, slot filling, and question answering, and evaluate a diverse set of retrievers.", "We find that the retrievers exhibit popularity bias, sig-nificantly under-performing on rarer entities that share a name, e.g., they are twice as likely to retrieve erroneous documents on queries for the less popular entity under the same name.", "These experiments on AmbER sets show their utility as an evaluation tool and highlight the weaknesses of popular retrieval systems.", "1 1 Introduction Substantial progress in NLP has been made on closed tasks, where queries are paired with relevant documents (Rajpurkar et al., 2016; Dua et al., 2019).", "However, there is growing interest in open-domain tasks, where relevant documents need to be retrieved from a knowledge source before an NLP system can perform reasoning and produce an answer (Chen et al., 2017; Petroni et al., 2021).", "The open-domain setting better reflects real-world usage for tasks where relevant information is generally not provided ( e.g., fact checking).", "Work started during an internship at Apple.", "1 The AmbER sets used in this paper and the code to generate them are available at https://github.com/ anthonywchen/AmbER-Sets .", "Because success hinges on finding relevant documents, open-domain progress has been closely tied to improvements in retrieval systems 2 (Lee et al., 2019; Karpukhin et al., 2020; Lewis et al., 2020b).", "A crucial challenge when interacting with a large knowledge source ( e.g. , Wikipedia) is entity ambiguity, the phenomenon where a single name can map to multiple entities.", "Resolving this ambiguity is referred to as entity disambiguation and is an important step for effective retrieval.", "For example, given the query What musical instrument does Abe Lincoln play? , documents about the musician should rank higher than other entities with the same name (Figure 1).", "Although entity disambiguation has been extensively studied in entity linking (Hof-fart et al., 2011; Rao et al., 2013; Sevgili et al., 2 For example, replacing the BM25 retriever with DPR on Natural Questions increases exact match by 15 points. 
2020) and search (Balog et al., 2010, 2011), in the context of open-domain NLP it is unclear how good retrieval systems are when faced with queries with ambiguous entities.", "Evaluating entity ambiguity is challenging because the popularity of entities follows a long tail (Figure 2) and rare entities are seldom covered in naturally occurring datasets.", "In this paper we introduce AmbER sets, a benchmark for evaluating the entity disambiguation capabilities of retrievers across multiple NLP tasks.", "Each AmbER set is a collection of Wikidata entities that share a name, and their corresponding queries for specific NLP tasks.", "For each set, we define the head entity as the most popular entity and tail entities as the less popular ones.", "By creating queries for multiple entities that share a name, AmbER sets provide an accurate test of entity disambiguation capabilities of retrievers and help assess the role of entity popularity in disambiguation.", "We show examples of AmbER sets for the question answering task in Table 1.", "We automatically create AmbER sets by mining the Wikidata knowledge graph (Vrandecic and Krotzsch, 2014) for relevant names and entities, and leveraging task-specific templates to generate inputs for three tasks: fact checking, slot filling, and question answering (Figure 3).", "In total, our AmbER sets contain 80k task-specific queries, which we align to the Wikipedia snapshot from KILT (Petroni et al., 2021).", "We use AmbER sets to conduct a systematic study of various retrieval systems that operate under different principles, such as token overlap and dense embedding similarity.", "Retrievers perform very differently on AmbER sets in terms of absolute retrieval numbers, with Bootleg (Orr et al., 2020), an entity-linking-based retriever, performing best.", "Despite these differences, all retrievers exhibit a large degree of popularity bias, under-performing on inputs concerning tail entities.", "TF-IDF, a token-based retriever, performs about four times worse on tail entity inputs compared to head entity inputs.", "Even with Bootleg, the best-performing retriever, performance on tail entities is still 1.5 times lower than on head entities.", "Our results on AmbER sets demonstrate that there is significant work to be done on making retrievers robust in handling entity disambiguation.", "Retrieving relevant documents from large knowledge sources such as Wikipedia is an important first step in the open-domain pipeline.", "An inherent problem in working with such sources is entity disambiguation: resolving a name (mention) to an entity in the knowledge source.", "Entity disambiguation can be challenging because many entities share a name, and the popularity of entities follows a long-tail distribution (Figure 2).", "Despite the importance of entity disambiguation, it remains an understudied problem for open-domain NLP.", "We introduce AmbER sets for evaluating entity disambiguation capabilities of retrievers and analyze the role of entity popularity in disambiguation.", "We first provide an intuition for an AmbER set before concretely defining one.", "Consider two entities, a president and a musician, both of which have the name Abe Lincoln (Figure 1).", "Now, consider the query Which battle did Abe Lincoln fight in? 
and assume a retriever correctly returns the article about the president for this query.", "Simply because the correct document was retrieved does not mean a retriever has the ability to disambiguate between the president and the musician, as the president is much more popular.", "We should only be confident in its ability to disambiguate entities if we also pose a query about the less popular musician and the retriever again returns the correct document (as opposed to the document about the president).", "Based on this intuition, we define an AmbER set as a collection of queries that satisfy the following: Criteria 1: Polysemous Name: The queries in an AmbER set are all about entities that share a common name (e.g., Abe Lincoln).", "Criteria 2: Disparity in Popularity: An AmbER set contains queries about both the most popular entity for a name (the head entity), e.g., the president, and the less popular entities (the tail entities), e.g., the musician.", "Criteria 3: Resolvable Ambiguity: The content of the query should be sufficient to resolve to the correct entity.", "The query Which battle did Abe Lincoln fight in? satisfies this criterion, because there is only one Abe Lincoln that fought in a war, while Where was Abe Lincoln born? does not, since it applies to all Abe Lincolns.", "We provide examples of AmbER sets for the task of question answering in Table 1.", "2.2 Open-Domain Tasks In this work, we create AmbER sets for three tasks: fact checking, slot filling, and question answering (Table 2).", "We consider these three tasks for three reasons.", "First, these three tasks are diverse in nature.", "In this work, slot filling is a generation task, question answering is a span selection task, and fact checking is a classification task.", "Second, the training sets available for each task are quite disparate.", "The largest fact checking training set, FEVER (Thorne et al., 2018), has 80k instances, while the slot filling dataset, T-REx (Elsahar et al., 2018), has over 2 million instances.", "The final reason we study these three tasks is that their inputs are short and easy to create.", "While AmbER sets can be manually created, doing so can be time-consuming, requiring a human to manually scour a knowledge base for polysemous names and related entities before manually writing queries for those entities.", "Instead, we present a pipeline for automatically creating AmbER sets using the Wikidata knowledge graph (Vrandecic and Krotzsch, 2014).", "In this section, we describe two different collections of AmbER sets, and discuss our automatic pipeline for creating AmbER sets.", "A natural question is How do retrievers handle entity ambiguity when two entities have the same entity type, as opposed to when they have different types?", "To answer this question, we create two collections of AmbER sets.", "The first is AmbER-H, a collection of AmbER sets where all entities are humans.", "The choice to restrict AmbER-H to humans is motivated by the fact that humans have properties that help distinguish them from other humans, generally based on occupation.", "The second is AmbER-N, a collection of AmbER sets where all entities contained are non-humans, and disambiguation of a name is between non-human entities with different entity types.", "This is because a non-human entity, like a movie, does not generally have a single distinguishing property to distinguish it from other movies.", "This makes it natural to compare non-human entities to other non-human entities with different 
types.", "We specify the entity types in each collection in Table", "3. Davy Jones Name David Bowie* Popularity: 4.09 Wikidata Entities Davy Jones (racing driver) Popularity: 2.49 Davy Jones (baseball) Popularity: 1.93 Gender: Male Birthplace : Brixton Gender : Male Sport : Baseball Gender : Male Sport : Auto Racing Movement : New Wave Wikidata Properties Sports Team : Chicago White Sox Task Specific Inputs QA: Which movement is Davy Jones associated with?", "We now describe a pipeline to automatically create AmbER sets for three tasks: fact checking, slot filling, and question answering.", "We provide a visualization of the pipeline in Figure", "3. Collecting Names and Entities We begin by collecting all entity aliases 3 in Wikidata.", "From these aliases, we filter for those that are shared by multiple Wikidata entities.", "Each entity in Wikidata is represented by a unique QID.", "The entities must have an entity type from Table 3 depending on the collection we are collecting AmbER sets for.", "Each alias and associated entities form the basis for an AmbER set.", "Within each set, we define the head and tail entities based on the number of Wikipedia page views for the month of October 2019.", "We filter out AmbER sets where the percentage gap in popularity between the head entity and the most popular tail entity is less than 10% to account for noise in the monthly page views.", "Collecting Distinguishing Properties We gather properties and associated values for each entity from Wikidata.", "We only retain properties that are in a specified list (Table 3), as they are useful for resolving ambiguity (Criteria 3) .", "We also filter a property if two entities within an AmbER set have that property, ensuring that the remaining properties can be used to disambiguate between entities with the same name.", "These properties are used to instantiate the queries.", "3 Aliases are all possible names for an entity.", "the knowledge source for AmbER sets for better reproducibility.", "Each Wikipedia document in KILT has an associated QID.", "For each entity, we find all Wikipedia documents with that associated QID.", "After this alignment, we apply a round of filtering on the tuples.", "For each tuple, we check that the value of the tuple is within the first 350 tokens of the aligned Wikipedia article.", "If not, we remove AmbERH AmbERN # AmbER Sets 2,093 5,237 Averages per AmbER Set ...# entities 2.98 2.42 ...# entities w/ properties 2.03 2.06 ...# properties 2.84 2.64 # Input Queries 23,768 55,216 ...Question Answering (QA) 5,942 13,804 ...Slot Filling (SF) 5,942 13,804 ...Fact checking (FC) 11,884 27,608 Table 4: Statistics of AmbER collections.", "Instantiating AmbER Instances Recall that our goal was to create AmbER sets for three tasks: fact checking, slot filling, and question answering.", "We are able to create queries for all three tasks simultaneously using the collected Wikidata tuples.", "For question answering and fact checking, we use templates based on properties to instantiate inputs.", "Three of the authors wrote a template each for each property for the two tasks.", "Duplicate templates are removed, resulting in an average of 3 question answering templates per property and 2.7 fact checking templates per property.", "See Appendix B for the complete list of templates.", "For slot filling, we create a single input from each Wikidata tuple by concatenating the AmbER set name with the property name, and using the value of the tuple as the answer.", "For question answering, we also 
create a single input for each tuple by filling in the template with the AmbER set name and using the value of the tuple as the answer.", "For fact checking, we create two inputs for each tuple: one claim that is true, using the tuple value, and one claim that is false.", "The false claim is created by finding the most popular value for the tuple property that does not match the tuple value.", "We provide statistics for AmbER sets in Table 4.", "On average, each AmbER set has about three entities that share the same name.", "Of these three entities, on average, only two have properties after filtering.", "In total, our AmbER sets contain about 80k task-specific input queries.", "Since our pipeline is automated and relies on Wikipedia and Wikidata, there are a few limitations worth noting.", "AmbER sets will be affected by incompleteness of the knowledge source, sometimes resulting in ambiguous queries if a property is missing from Wikidata but answerable from Wikipedia text.", "For this reason, we only select a few properties for each type (Table 3).", "Second, even though we author multiple templates for each property, the reliance on these templates limits the syntactic diversity in the queries (not a critical concern, since we are only evaluating existing models).", "Also, we use Wikipedia page views as a proxy for real-world popularity of entities.", "Defining popularity in this way may be problematic, as page views for an entity can fluctuate, and may make our pipeline difficult to generalize to other knowledge sources, where this information may not be available.", "Several design choices in creating AmbER sets are worth further investigation.", "We limit AmbER sets to a pre-specified list of entity types and properties to ensure that entities in an AmbER set are distinguishable.", "This precludes other properties that may be useful in distinguishing entities, reducing the diversity in AmbER sets.", "Another design choice is that we allow any alias in Wikidata to form an AmbER set; however, not all aliases are canonical ways to refer to the entity.", "For instance, Shaquille O'Neal has the unusual alias The Big Cactus, potentially leading to a somewhat unrealistic query What sport did The Big Cactus play?", "We plan to revisit these design choices in future work.", "Retrieval Systems The primary focus of this work is to evaluate the entity ambiguity of retrieval systems.", "We consider four retrievers based on different retrieval paradigms.", "The first three are TF-IDF, a token-based retriever using sparse embeddings, DPR (Karpukhin et al., 2020), a dense-embedding-based retriever, and BLINK (Wu et al., 2020), a linker-based retriever which ranks documents based on input entities.", "These three retrievers have been thoroughly evaluated on a number of open-domain tasks in Petroni et al. (2021) with no obvious winner across tasks.", "Encouraged by the disambiguation success on rare entities by Orr et al. (2020), we also evaluate a retriever based on Bootleg, another entity linker.", "We provide additional details about these retrievers in Appendix D.", 
"Table 5: Top-1 retrieval results on each collection of AmbER sets. Per task (FC, SF, QA), columns are All / Head / Tail / all-correct. AmbER-H: TF-IDF 17.3 / 28.5 / 8.2 / 0.0, 18.8 / 31.9 / 8.1 / 0.0, 16.7 / 28.2 / 7.3 / 0.1; DPR 18.1 / 23.9 / 13.3 / 0.1, 8.0 / 11.6 / 5.1 / 0.3, 13.1 / 19.6 / 7.9 / 1.1; BLINK 55.9 / 64.4 / 49.0 / 5.6, 38.2 / 57.0 / 22.9 / 11.5, 31.7 / 40.5 / 24.6 / 6.6; Bootleg 34.8 / 43.0 / 28.2 / 0.7, 56.5 / 63.9 / 50.6 / 25.3, 67.2 / 77.1 / 59.1 / 36.1. AmbER-N: TF-IDF 9.4 / 13.6 / 4.9 / 0.0, 13.4 / 21.0 / 5.2 / 0.2, 13.9 / 21.7 / 5.4 / 0.3; DPR 36.9 / 48.0 / 24.8 / 4.4, 29.9 / 40.9 / 18.0 / 6.0, 36.2 / 49.2 / 22.2 / 9.3; BLINK 11.7 / 13.9 / 9.4 / 0.0, 5.7 / 7.3 / 3.9 / 0.7, 35.2 / 44.7 / 24.9 / 10.1; Bootleg 3.5 / 4.6 / 2.4 / 0.0, 52.3 / 61.3 / 42.5 / 22.4, 59.8 / 69.5 / 49.3 / 29.0.", "Downstream Models The dominant approach to open-domain tasks is a two-stage process where a retriever first finds relevant documents, followed by a downstream model that processes these documents to produce an answer.", "We evaluate the end-to-end performance on AmbER sets by training downstream NLP models on our tasks of interest.", "For fact checking, we fine-tune a BERT classifier (Devlin et al., 2019) on FEVER (Thorne et al., 2018).", "For question answering, we fine-tune a RoBERTa model (Liu et al., 2019) on Natural Questions (Kwiatkowski et al., 2019).", "For slot filling, a generation task, we fine-tune a BART model (Lewis et al., 2020a) on T-REx (Elsahar et al., 2018).", "We provide example training instances in Table 2 and additional details on the models in Appendix E. We use the AllenNLP and HuggingFace Transformers libraries to fine-tune our downstream models (Gardner et al., 2018; Wolf et al., 2020).", "In this section, we evaluate existing open-domain NLP pipelines using AmbER sets.", "We also conduct a user study on data quality, described below.", "[Figure 4: Popularity Gap vs. Retrieval Gap.]", "Top Document Retrieval We report retrieval performance in Table 5 in terms of retriever accuracy@1 (the % of instances where the first retrieved document is the gold document).", "For each task, we report values on the entire AmbER set (All), as well as instances corresponding only to Head entities or to Tail entities.", "We also report a metric we call all correct, the fraction of AmbER sets in which all queries had the correct document retrieved.", "Table 7: End-to-end performance on AmbER sets (columns: All / Head / Tail). AmbER-H: FC, BERT (Oracle) 77.7 / 73.6 / 80.3, BERT + BLINK 59.8 / 60.1 / 57.7; SF, BART (Oracle) 83.9 / 85.0 / 83.5, BART + BLINK 34.4 / 38.2 / 32.6; QA, BERT (Oracle) 71.4 / 77.7 / 83.0, BERT + BLINK 27.5 / 33.8 / 22.3. AmbER-N: FC, BERT (Oracle) 66.6 / 63.9 / 69.5, BERT + DPR 60.9 / 61.4 / 60.4; SF, BART (Oracle) 82.1 / 80.1 / 84.3, BART + DPR 18.6 / 18.6 / 18.6; QA, BERT (Oracle) 83.5 / 85.1 / 81.8, BERT + DPR 26.0 / 31.3 / 20.4.", "All retrievers do better on head entities compared to tail entities.", "Since BLINK, Bootleg, and DPR are initialized using pre-trained language models, they may have a predisposition towards being biased to more popular entities.", "However, we find TF-IDF also does better on head entities, perhaps because more popular entities have longer Wikipedia pages, possibly increasing term-frequency scores.", "Second, there are large discrepancies between a retriever's performance on different tasks for an AmbER collection.", "For instance, DPR does substantially worse on slot filling compared to its performance on question answering.", "This is surprising since queries for all tasks are created from the same set of Wikidata tuples.", "Finally, we find that retrievers are mostly incorrect on getting all the queries in a set correct, with some receiving a score of 0 on some 
tasks.", "Overall, we find that the Bootleg retriever on average does the best across tasks, however there is significant scope for improvement.", "Entity Confusion To explicitly evaluate whether retrievers get confused by entities in the same AmbER set, we compute entity confusion for retrievers defined as the percentage of queries where the retriever ranks a document for an incorrect entity from the same AmbER set over the gold document (Table 6).", "We find that across retrievers, tasks, and AmbER collections, entity confusion is twice as high for tail entity inputs.", "This result indicates that the popularity of an entity for a given name plays a significant role in retrieval performance.", "Effect of Popularity Gap Since the difference in popularity between the head and tail entities can vary considerably, these results obfuscate the effect of the size of the popularity gap.", "We explore how the gap in popularity between head and tail entities translates to the gaps in performance on their associated queries.", "For a head entity with popularity p h and a tail entity with popularity p t from the same AmbER set, we calculate popularity gap, p h p t p t , and bin associated head/tail inputs based on the gap 6 .", "For each bin, we calculate the difference in accuracy@1 between the head and tail entity queries.", "Results for QA AmbER sets (Figure 4) show that there is a strong correlation between the popularity gap and the difference in performance.", "End to End Results We evaluate end to end performance in several evaluation settings with all results provided in Table", "7. The metrics used are F1 for slot filling and question answering and accuracy for fact checking.", "In the oracle setting, we directly provide the downstream NLP model the gold document, and find that the gap between head entities and tail entities is fairly small.", "This suggests that in closed NLP settings, where the gold document is known, entity disambiguation is not a major concern.", "In the regular retrieval setting, we provide the model the top 20 documents as ranked by a retrieval system (BLINK and DPR), and find that retrievers still perform better on head entity queries (see Appendix A).", "The downstream systems that use retrieved documents display a noticeable gap in end-to-end performance between head and tail entity inputs.", "This is expected, as retrieval systems perform worse on tail entities.", "User Study AmbER sets are created in a largely automatic process, raising questions about data quality.", "To address these questions, we conduct a small user study on AmbER sets to evaluate whether the queries are resolvable by humans.", "We present a query from a QA AmbER set along with three documents for the entities from the same AmbER set, one of which is the gold document.", "We first ask the user to select the relevant document, then we ask the user to select an answer span from the selected document.", "In total, we asked 7 subjects to examine about 120 queries across AmbER-H and AmbERN , and computed their accuracy in 6 Bin width of 20%.", "Queries with a popularity gap higher than 100% are binned into the highest bin.", "selecting the correct document and answer (Table 8).", "We also compare retrievers for this task, i.e. 
select from 3 documents for the same queries, and find that humans perform very well on the document selection task compared to retrievers on both sets.", "We also compare the accuracy of answer selection, and see that the closed-domain NLP model (fine-tuned BERT) is almost as accurate as humans on the same set of queries.", "(The relatively low answer score is due to artifacts in using EM for QA evaluation, and is consistent with human performance on span selection; Rajpurkar et al., 2016.)", "This further confirms that closed NLP models are not the source of bias towards head entities, but the retrievers are.", "Entity Ambiguity As previously mentioned, entity ambiguity is when a single name can match multiple entities in a knowledge source.", "Entity ambiguity has been most studied in the context of entity linking (Rao et al., 2013).", "To improve disambiguation, entity linkers have included auxiliary information such as entity types (Onoe and Durrett, 2020) and entity descriptions (Logeswaran et al., 2019).", "A recent thread of work aims to study how language models recall and leverage information about names and entities.", "Prabhakaran et al. (2019) show that names can have a measurable effect on the predictions of sentiment analysis systems.", "Shwartz et al. (2020) demonstrate that pre-trained language models implicitly resolve entity ambiguity by grounding names to entities based on the pretraining corpus.", "The problem of entity ambiguity also appears implicitly in entity-centric tasks such as determining the semantic relatedness between entities (Hoffart et al., 2012) and entity-oriented search.", "Popularity Bias Systems that perform worse on the long tail suffer from what is known as popularity bias.", "This problem has been studied extensively in the recommendation systems literature, where recommendation systems are known to often ignore the long tail of products and instead recommend very popular items (Abdollahpouri et al., 2017; Chen et al., 2020).", "This has the effect of unfairly hurting users who would prefer these less popular items (Abdollahpouri et al., 2019; Ciampaglia et al., 2018).", "We explore popularity bias from the angle of retrieval as opposed to recommendation, and find popularity bias exists in retrieval systems.", "Open-Domain Ambiguity Ambiguity is an inherent problem when it comes to open-domain reasoning.", "Min et al. 
(2020) showed that half of the instances sampled from Natural Questions are ambiguous, with multiple correct answers.", "AmbER sets are similar in that the ambiguity concerns the entity in the query; however, in contrast to Natural Questions, AmbER set inputs have been constructed such that the ambiguity is resolvable.", "Challenge Sets There have been many evaluation sets specifically designed to assess a model's ability to handle a specific phenomenon (Naik et al., 2018; Zhao et al., 2018; McCoy et al., 2019; Warstadt et al., 2020; Richardson et al., 2020; Jeretic et al., 2020; Ribeiro et al., 2019).", "Some of these challenge sets, similar to AmbER sets, use templates to generate a large amount of evaluation data quickly (Richardson et al., 2020; McCoy et al., 2019; Ribeiro et al., 2020).", "AmbER sets can be viewed as a challenge set for assessing open-domain systems' ability to handle entity ambiguity.", "Entity ambiguity is an inherent problem in retrieval, as many entities can share a name.", "For evaluating the disambiguation capabilities of retrievers, we introduce AmbER sets; an AmbER set is a collection of task-specific queries about entities that share a name, but the queries have sufficient content to resolve the correct entity.", "We create a broad range of AmbER sets, covering many entity types, with input queries for three open-domain NLP tasks: fact checking, slot filling, and question answering.", "Our experiments demonstrate the struggles of current retrievers in handling entity ambiguity.", "In particular, we find that the popularity of an entity in relation to other entities that share a name plays a significant role during disambiguation.", "For instance, we find that all tested retrievers are about twice as likely to retrieve erroneous documents when dealing with less popular entities than with the most popular entity with the same name.", "Future goals include improving the entity disambiguation capabilities of retrievers, perhaps more directly incorporating ideas from entity linking and coreference resolution.", "The AmbER sets and the code for the generation pipeline are available at https://github.com/anthonywchen/AmbER-Sets .", "We would like to thank Jo Daiber, Michael Tu, Russ Webb, Matt Gardner, Robert Logan, Sherry Tongshuang Wu, and the anonymous reviewers for providing valuable feedback for our work.", "This work is funded in part by the DARPA MCS program under Contract No. N660011924033 with the United States Office of Naval Research." ]
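The record above builds AmbER sets by grouping Wikidata entities under shared aliases, choosing the head entity by page views, and discarding sets whose head/tail popularity gap is under 10%. A toy sketch of that grouping logic (the records and entity identifiers below are made up; this is not the authors' released pipeline, which lives at the repository cited above):

```python
from collections import defaultdict

# Toy input: (alias, entity_id, monthly_page_views). All values are made up.
records = [
    ("Abe Lincoln", "president", 1_200_000),
    ("Abe Lincoln", "musician", 3_500),
    ("Davy Jones", "david_bowie", 980_000),
    ("Davy Jones", "racing_driver", 4_100),
    ("Davy Jones", "baseball_player", 900),
    ("Unique Name", "only_entity", 50_000),
]

def build_amber_sets(records, min_gap=0.10):
    """Group entities by alias, keep polysemous names (Criteria 1), and drop
    sets whose head/tail popularity gap is under min_gap (the 10% filter)."""
    by_alias = defaultdict(list)
    for alias, ent, views in records:
        by_alias[alias].append((ent, views))
    amber_sets = {}
    for alias, ents in by_alias.items():
        if len(ents) < 2:  # name maps to a single entity: not polysemous
            continue
        ents.sort(key=lambda e: -e[1])
        head_views, tail_views = ents[0][1], ents[1][1]
        if (head_views - tail_views) / tail_views < min_gap:
            continue  # gap too small relative to monthly page-view noise
        amber_sets[alias] = {"head": ents[0][0],
                             "tails": [e for e, _ in ents[1:]]}
    return amber_sets

print(build_amber_sets(records))
```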
[ "abstain", "abstain", "objective", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "abstain", "result", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "method", "method", "objective", "result", "result", "abstain", "other", "other", "other", "other" ]
[ "A number of psycholinguistic studies have factorially manipulated words' contextual pre-dictabilities and corpus frequencies and shown separable effects of each on measures of human sentence processing, a pattern which has been used to support distinct mechanisms underlying prediction on the one hand and lexical retrieval on the other.", "This paper examines the generalizability of this finding to more realistic conditions of sentence processing by studying effects of frequency and predictability in three large-scale naturalistic reading corpora.", "Results show significant effects of word frequency and predictability in isolation but no effect of frequency over and above predictability, and thus do not provide evidence of distinct mechanisms.", "The non-replication of separable effects in a naturalistic setting raises doubts about the existence of such a distinction in everyday sentence comprehension.", "Instead, these results are consistent with previous claims that apparent effects of frequency are underlyingly effects of predictability.", "Are there distinct effects of a word's frequency versus predictability in human sentence comprehension?", "Recent evidence implicates prediction as a major organizing principle in cognition (Bubic et al., 2010; Singer et al., 2018; Keller and Mrsic-Flogel, 2018), and psycholinguists have long studied the role of prediction in human sentence processing and its relation to other comprehension mechanisms (Marslen-Wilson, 1975; Kutas and Hillyard, 1984; MacDonald et al., 1994; Tanen-haus et al., 1995; Hale, 2001; Norris, 2006; Levy, 2008; Frank and Bod, 2011).", "Some prominent theories of word recognition claim that ease of lexical access is modulated by the strength of a word's representation in memory, independently of contextual factors that guide prediction (Seiden-berg and McClelland, 1989; Coltheart et al., 2001; Harm and Seidenberg, 2004).", "Other theories hold that apparent effects of frequency are underlyingly effects of predictability (Norris, 2006; Levy, 2008; Rasmussen and Schuler, 2018).", "A number of studies using constructed stimuli that factorially manipulate word frequency and predictability have found separable additive effects of each, suggesting distinct influences on lexical processing (see Staub, 2015 for a re-view).", "This paper examines the generalizability of these findings to typical sentence comprehension by searching for separable effects of frequency and n -gram predictability using deconvolutional time series regression (DTSR) models (Shain and Schuler, 2018) fitted to three large naturalistic reading corpora: Natural Stories (Futrell et al., 2018), Dundee (Kennedy et al., 2003), and UCL (Frank et al., 2013).", "While results show evidence of both frequency and predictability effects in isolation, they show no effect of frequency over predictability and thus do not support the existence of separable effects.", "They are instead consistent with either (1) an account of apparent frequency effects as epiphenomena of predictive processing (Norris, 2006; Levy, 2008) or (2) a more circumscribed role for frequency effects in naturalistic reading than constructed experiments suggest.", "It has long been recognized that low-frequency words are harder to process (Inhoff and Rayner, 1986).", "For example, in a neutral context, the more frequent bottle should on average be processed more quickly than the less frequent kettle : (1)", "a. I have a bottle .", "b. 
I have a kettle .", "However, context can dramatically alter these patterns by changing words' predictability (Ehrlich and Rayner, 1981): (2)", "Some models of word recognition (Seidenberg and McClelland, 1989; Coltheart et al., 2001; Harm and Seidenberg, 2004) posit a context-independent lexical retrieval mechanism, distinct from any mechanisms for predictive coding, with processing cost proportional to the strength of a word's representation in memory (a function of lexical frequency).", "Such a view predicts separable effects of frequency and predictability in human language comprehension.", "Other models (Hale, 2001; Norris, 2006; Levy, 2008; Rasmussen and Schuler, 2018) posit no such context-independent retrieval mechanism, and instead propose a uni-fied comprehension mechanism that incrementally reallocates resources between possible interpretations of the unfolding sentence, with processing cost proportional to the amount of information (resource reallocation) contributed by each new word.", "Such a view predicts no separable effects of frequency and predictability because lexical frequencies are subsumed into the incremental probability model.", "Consistently with the first hypothesis, previous studies have shown separable additive effects of frequency and predictability by factorially manipulating corpus frequency and cloze predictability (Rayner et al., 2004; Ashby et al., 2005; Gollan et al., 2011; Staub and Benatar, 2013, see Staub, 2015 for a review).", "However, cloze estimates poorly distinguish degrees of low contextual probability (Smith and Levy, 2013), and constructed stimuli, while affording direct control over linguistic variables, may fail to reflect the typical distributional characteristics of the language, lack context, and/or inadvertently trigger suspension of the usual processes of pragmatic inference due to the absence of an overarching discourse (Demberg and Keller, 2008; Hasson and Honey, 2012; Shain et al., 2018).", "It is therefore not yet clear whether frequency and predictability effects can be separated in a more realistic setting.", "Concerns about the ecological validity of constructed stimuli can be addressed by the use of naturalistic stimuli (e.g. stories, newspaper articles, persuasive pieces, etc.).", "Naturalistic experiments are therefore an important complement to constructed experiments in the study of cognitive processes (Hasson and Honey, 2012).", "However, naturalistic experiments introduce their own challenges.", "Without the ability to factorially manipulate frequency and predictability, naturalistic studies must confront the natural collinearity between these two variables in ordinary language (Demberg and Keller, 2008).", "Furthermore, because naturalistic stimuli do not de-fine a critical region of the stimulus, responses are generally modeled word-by-word (Demberg and Keller, 2008; Frank and Bod, 2011; Smith and Levy, 2013; van Schijndel and Schuler, 2015).", "It is standard psycholinguistic practice to do so through ablative likelihood ratio testing (LRT) of linear mixed effects regression (LMER) models (Bates et al., 2015) fitted to the dependent variable of interest (e.g. 
fixation duration) (Demberg and Keller, 2008; Frank and Bod, 2011; van Schijndel and Schuler, 2015; Shain et al., 2016).", "However, this approach has important disadvantages.", "First, naturalistic data constitute time series that may violate the independence assumptions of linear regression and therefore confound model interpretation and hypothesis testing (Baayen et al., 2017, 2018; Shain and Schuler, 2018).", "One major such confound is temporal diffusion (i.e., a lingering response to stimuli), which can be brought under statistical control through deconvolutional time series regression (DTSR) models that directly estimate temporal structure in the relationships between predictors and response (Shain and Schuler, 2018).", "Second, LRT implicitly evaluates on in-sample data, making it challenging to diagnose overfitting and to assess external validity (Vasishth et al., 2018).", "This can be addressed through out-of-sample non-parametric tests, such as the paired permutation test widely used in machine learning (Demsar, 2006).", "This paper seeks to complement constructed stimulus experiments by searching for separable effects of frequency and predictability during naturalistic reading, using methods designed to address the challenges of Section 2.2.", "The problem of temporal diffusion is addressed by using DTSR models rather than LMER (see Appendix A for implementation details).", "The problem of external validity is addressed by using held-out paired permutation testing rather than LRT, thus basing the hypothesis test directly on generalization error.", "The possibility that cloze probabilities are poor estimates of predictability for low-frequency words is addressed by operationalizing predictability as 5-gram surprisal generated by a large-vocabulary statistical language model.", "The natural collinearity of frequency and predictability is addressed through the use of large-scale data that should permit subtle differentiation of collinear effects.", "Taken together, the corpora examined in this study contain over one million fixations generated by 243 human subjects.", "Although there is a large-magnitude correlation between unigram log probability (frequency) and 5-gram surprisal (predictability) in these corpora, as shown in Table 2, synthetic experiments show that DTSR can faithfully identify models from much smaller data than that used here, even when all predictors are correlated at the 0.75 level (Shain, 2018).", "Given the size of the data, failure to distinguish effects of frequency and predictability would raise doubts about the existence of such a separation in naturalistic reading.", "DTSR models are fitted separately to each of the Natural Stories (Futrell et al., 2018), Dundee (Kennedy et al., 2003), and UCL (Frank et al., 2013) corpora.", "Following previous investigations of this question (Rayner et al., 2004; Ashby et al., 2005; Gollan et al., 2011, inter alia), frequency is estimated from corpus statistics, in this case KenLM (Heafield et al., 2013) unigram models trained on the Gigaword 3 corpus (Graff and Cieri, 2003).", "Unlike previous studies using cloze estimates of predictability (Rayner et al., 2004; Ashby et al., 2005; Gollan et al., 2011, inter alia), predictability is statistically estimated, again using KenLM models (5-gram) trained on Gigaword 3.", "This is both because (1) cloze norming all words contained in thousands of naturalistic sentences is prohibitive and (2) statistical language models trained on large data can more reliably 
differentiate low probability continuations (Smith and Levy, 2013).", "Following recent work on prediction effects in naturalistic sentence comprehension (Demberg and Keller, 2008; Frank and Bod, 2011; Smith and Levy, 2013), predictability estimates are encoded as surprisal by negating the 5-gram log probabilities.", "The models assume ShiftedGamma impulse response functions (Shain and Schuler, 2018; see Appendix A) for each of these variables, as well as for the nuisance variables word length, saccade length, and an indicator variable for whether the previous word was fixated.", "(The variables saccade length and previous was fixated are only used for eye-tracking, since they are not relevant to self-paced reading.)", "Natural Stories is a self-paced reading corpus containing 848,768 word fixations from 181 subjects reading narrative and informational texts.", "Dundee is an eye-tracking corpus containing 260,065 word fixations from 10 subjects reading newspaper editorials.", "UCL is an eye-tracking corpus containing 53,070 fixations from 42 subjects reading sentences taken from novels by amateur authors.", "Although the sentences in UCL were randomized and presented in isolation, and are therefore subject to some of the concerns about constructed stimuli raised in Section 2, they are included here because the stimuli are naturally occurring rather than constructed for a particular experimental purpose.", "The UCL results replicate the overall pattern of significance (Table 3), and excluding them has no impact on the overall results.", "To capture trends in the response at different timescales, the models also include linear effects for the word's index in the sentence (sentence position) and document (trial).", "Following Shain and Schuler (2018), in addition to the intercept, the models contain a convolved intercept (rate) designed to capture effects of stimulus timing.", "The response used in all corpora is log fixation duration (go-past for eye-tracking).", "Outlier filtering is performed in each corpus following the procedures described in Shain and Schuler (2018).", "Approximately half the data in each corpus is used for training, with the remaining half reserved for held-out evaluation.", "Models include by-subject random intercepts as well as by-subject random slopes and impulse response parameters for each predictor.", "Held-out hypothesis testing uses a diamond ablative structure, first ablating fixed effects for 5-gram surprisal and unigram log probability individually and then ablating both.", "All random effects are retained in all models.", "Comparisons use paired permutation tests of the by-item losses on the evaluation set, pooling across all corpora.", "(Note that the non-parametric permutation test permits this pooling procedure to unify the models from all three corpora into a single test, since, unlike LRT, permutation testing supports out-of-sample comparison.)", "Data processing was performed using the ModelBlocks toolchain (van Schijndel and Schuler, 2013), available at https://github.com/modelblocks/modelblocks-release .", "Model fitting was performed using the DTSR software library (Shain and Schuler, 2018), available at https://github.com/coryshain/dtsr .", "See the citations above for data access instructions.", "Effect estimates from the full models are presented in Table 1 and pooled statistical comparisons are presented in the Pooled column of Table 3.", "(The estimated impulse response functions that underlie these effect sizes are plotted in Appendix B.)", "If predictability and frequency effects are additive, all four comparisons in Table 3 should be 
significant.", "As shown, this is not the case.", "There is evidence that both frequency ( unigram log probability ) and predictability ( 5-gram surprisal ) in isolation reliably index processing difficulty, as shown by the significance of both effects over the baseline.", "However, when the effects are compared to each other, predictability explains significantly more variance than frequency but not vice versa.", "This general pattern of results further obtains for each corpus individually, as shown by the Corpus column breakdown in Table 3.", "One minor exception is that neither predictability nor frequency improves significantly over the other in Dundee.", "7 The Dundee results are nevertheless consistent with an interpretation in which frequency and predictability do not index distinct processing phenomena and inconsistent with an interpretation in which they do.", "These results thus provide no evidence of separable frequency and predictability effects, whether the corpora are considered together or individually.", "As described in Section 4, results show no evidence of separable effects of frequency and predictability in naturalistic reading.", "One possible explanation for this outcome is that 5-gram surprisal tracks human prediction effort better than cloze probabilities, in part because cloze probabilities are less reliable for infrequent words.", "Although countervailing evidence exists in the literature (e.g. Smith and Levy, 2011 found effects of cloze but not n -gram probabilities in human read-6 The estimated impulse response functions that underlie these effect sizes are plotted in Appendix B. 7 The p -value of 0.0105 observed for frequency over predictability does not achieve significance at the 0.05 level under 6-way Bonferroni correction (2 variables 3 corpora).", "ing times), in general this evidence is based on weak statistical competitors to cloze (e.g. 
Smith and Levy, 2011 used trigrams).", "By contrast, recent trends in cognitive modeling point toward a correlation between the linguistic and psycholinguistic performance of language models, such that more powerful models with lower perplexity also tend to correlate more strongly with measures of cognitive effort (Goodkind and Bicknell, 2018; van Schijndel and Linzen, 2018).", "This suggests that apparent frequency effects may arise in part from poor estimates of predictability.", "Note that by using 5-gram surprisal rather than more powerful neural language models (Jozefowicz et al., 2016), the analysis described in this paper is conservative in its attribution of variance to predictability.", "The failure of frequency is thus all the more compelling, since replacing 5-gram surprisal with surprisals obtained from more powerful language models would be unlikely to increase the explanatory power of frequency.", "Another potential explanation for the lack of separable effects of frequency and predictability is the use of naturalistic rather than constructed stimuli.", "Neuroscientific evidence shows that domain-general executive control regions activate during the processing of some artificially constructed language stimuli (Kaan and Swaab, 2002; Kuperberg et al., 2003; Novick et al., 2005; January et al., 2009) but fail to activate during the processing of naturalistic stimuli (Blank and Fedorenko, 2017).", "Such results have led some to argue that artificially constructed experimental stimuli may increase general cognitive load by coercing comprehension into problem solving, thereby engaging mechanisms that play little role in everyday sentence processing (Campbell and Tyler, 2018; Wehbe et al., in prep; Diashek et al., in prep).", "It is possible that the language comprehension mechanisms that implement linguistic prediction (Shain et al., under review) are relatively less engaged while domain-general executive control mechanisms are relatively more engaged during the processing of constructed stimuli presented without context, perhaps suppressing the influence of preceding words on participants' reading behavior.", "Further investigation is needed in order to explore this hypothesis.", "In any case, it is a statistical truism that negative results do not motivate acceptance of the null hypothesis.", "Thus, it is possible that frequency effects exist in naturalistic reading but are too small to be detected here.", "Nevertheless, the failure to find frequency effects in large naturalistic data indicates that any such effects are greatly attenuated in the processing of naturalistic texts in comparison to the processing of constructed stimuli, which circumscribes the importance that any such effects might have in driving comprehension effort during typical reading.", "This paper explored whether effects of word frequency and predictability are distinguishable in naturalistic sentence processing.", "Despite the size of the combined dataset, results showed no evidence of separable effects in naturalistic reading, contrary to previous findings of separable effects in studies using constructed stimuli.", "This investigation thus shows no evidence of a distinct, context-independent lexical retrieval mechanism modulated by strength of memory representation (Seidenberg and McClelland, 1989; Coltheart et al., 2001; Harm and Seidenberg, 2004), and instead favors a view in which sentence processing effort is driven by a mechanism that incrementally reallocates resources between competing 
"The discrepancy between constructed and naturalistic experimental settings presents a puzzle for our understanding of the mental processes that underlie human language comprehension, and is perhaps linked to recent evidence that artificially constructed linguistic stimuli can spuriously engage non-linguistic executive mechanisms by increasing general cognitive load as compared to naturalistic settings (Blank and Fedorenko, 2017; Campbell and Tyler, 2018).", "Further investigation into the precise sources of the discrepancy may shed new light on the interplay between prediction and memory in human sentence processing.", "The author would like to thank the anonymous reviewers for their helpful comments.", "This work was supported by National Science Foundation grants #1551313 and #1816891.", "All views expressed are those of the author and do not necessarily reflect the views of the National Science Foundation." ]
[ "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "other", "method", "abstain", "other", "other", "other" ]
[ "Although the existing Named Entity Recognition (NER) models have achieved promising performance, they suffer from certain drawbacks.", "The sequence labeling-based NER models do not perform well in recognizing long entities as they focus only on word-level information, while the segment-based NER models which focus on processing segment instead of single word are unable to capture the word-level dependencies within the segment.", "Moreover, as boundary detection and type prediction may cooperate with each other for the NER task, it is also important for the two subtasks to mutually reinforce each other by sharing their information.", "In this paper, we propose a novel Modularized Interaction Network (MIN) model which utilizes both segment-level information and word-level dependencies, and incorporates an interaction mechanism to support information sharing between boundary detection and type prediction to enhance the performance for the NER task.", "We have conducted extensive experiments based on three NER benchmark datasets.", "The performance results have shown that the proposed MIN model has outperformed the current state-of-the-art models.", "Named Entity Recognition (NER) is one of the fundamental tasks in natural language processing (NLP) that intends to find and classify the type of a named entity in text such as person (PER), location (LOC) or organization (ORG).", "It has been widely used for many downstream applications such as relation extraction (Xiong et al., 2018), entity linking (Gupta et al., 2017), question generation (Zhou et al., 2017) and coreference resolution (Barhom et al., 2019).", "Currently, there are two types of methods for the NER task.", "The first one is sequence labeling-based methods (Lample et al., 2016; Chiu and Nichols, 2016; Luo et al., 2020), in which each word in a sentence is assigned a special label (e.g., B-PER or I-PER).", "Such methods can capture the dependencies between adjacent word-level labels and maximize the probability of predicted labels over the whole sentence.", "It has achieved the state-of-the-art performance in various datasets over the years.", "However, NER is a segment-level recognition task.", "As such, the sequence labeling-based models which focus only on word-level information do not perform well especially in recognizing long entities (Ye and Ling, 2018).", "Recently, segment-based methods (Kong et al., 2016; Li et al., 2020b; Yu et al., 2020b; Li et al., 2021) have gained popularity for the NER task.", "They process segment (i.e., a span of words) instead of single word as the basic unit and assign a special label (e.g., PER, ORG or LOC) to each segment.", "As these methods adopt segment-level processing, they are capable of recognizing long entities.", "However, the word-level dependencies within a segment are usually ignored.", "NER aims at detecting the entity boundaries and the type of a named entity in text.", "As such, the NER task generally contains two separate and inde-pendent sub-tasks on boundary detection and type prediction.", "However, from our experiments, we observe that the boundary detection and type prediction sub-tasks are actually correlated.", "In other words, the two sub-tasks can interact and mutually reinforce each other by sharing their information.", "Consider the following example sentence: Emmy Rossum was from New York University.", "If we know University is an entity boundary, it will be more accurate to predict the corresponding entity type to be ORG.", "Similarly, if we know an entity has 
"However, sequence labeling-based models consider the boundary and type as labels, and thus such information cannot be shared between the sub-tasks to improve the accuracy.", "On the other hand, segment-based models first detect the segments and then classify them into the corresponding types.", "These methods generally cannot use entity type information in the process of segment detection and may propagate errors when passing information from segment detection to segment classification.", "In this paper, we propose a Modularized Interaction Network (MIN) model which consists of the NER Module, Boundary Module, Type Module and Interaction Mechanism for the NER task.", "To tackle the issue of recognizing long entities in sequence labeling-based models and the issue of missing word-level dependencies within a segment in segment-based models, we incorporate a pointer network (Vinyals et al., 2015) into the Boundary Module as the decoder to capture segment-level information for each word.", "Then, the segment-level information and the corresponding word-level information of each word are concatenated as the input to the sequence labeling-based model.", "To enable information interaction, we propose to separate the NER task into the boundary detection and type prediction sub-tasks and to enhance the performance of both by sharing information between them.", "Specifically, we use two different encoders to extract distinct contextual representations for the two sub-tasks and propose an Interaction Mechanism through which the representations mutually reinforce each other.", "Finally, this information is fused into the NER Module to enhance the performance.", "In addition, the NER Module, Boundary Module and Type Module share the same word representations, and we apply multitask training when training the proposed MIN model.", "In summary, the main contributions of this paper include: We propose a novel Modularized Interaction Network (MIN) model which utilizes both the segment-level information from segment-based models and the word-level dependencies from sequence labeling-based models in order to enhance the performance of the NER task.", "The proposed MIN model consists of the NER Module, Boundary Module, Type Module and Interaction Mechanism.", "We propose to separate boundary detection and type prediction into two sub-tasks, with the Interaction Mechanism incorporated to enable information sharing between them, achieving state-of-the-art performance.", "We conduct extensive experiments on three NER benchmark datasets, namely CoNLL2003, WNUT2017 and JNLPBA, to evaluate the performance of the proposed MIN model.", "The experimental results show that our MIN model achieves state-of-the-art performance and outperforms the existing neural-based NER models.", "In this section, we review the related work on current approaches for Named Entity Recognition (NER).", "These approaches can be categorized into sequence labeling-based NER and segment-based NER.", "Sequence labeling-based NER is regarded as a sequence labeling task, where each word in a sentence is assigned a special label (e.g., B-PER, I-PER).",
"Huang et al. (2015) utilized a BiLSTM as an encoder to learn the contextual representations of words, and then a Conditional Random Field (CRF) was used as a decoder to label the words.", "It achieved state-of-the-art results on various datasets for many years.", "Inspired by the success of the BiLSTM-CRF architecture, many other state-of-the-art models have adopted this architecture.", "Chiu and Nichols (2016) used a Convolutional Neural Network (CNN) to capture spelling features, and the character-level and word-level embeddings were concatenated as the input to a BiLSTM-CRF network.", "Further, Lample et al. (2016) proposed RNN-BiLSTM-CRF as an alternative.", "More recently, pre-trained language models such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have been adopted to further enhance the performance of NER.", "Segment-based NER identifies segments in a sentence and classifies each segment with a special label (e.g., PER, ORG or LOC).", "Kong et al. (2016) used a BiLSTM to map arbitrary-length segments into fixed-length vectors, which were then passed to Semi-Markov Conditional Random Fields (Semi-CRFs) for labeling the segments.", "Zhuo et al. (2016) adopted a gated recursive Convolutional Neural Network instead of a BiLSTM to build a pyramid-like structure for extracting segment-level features in a hierarchical way.", "More recently, Ye and Ling (2018) exploited the weighted sum of word-level representations within a segment to learn segment-level features with Semi-CRFs, which were then trained jointly at the word level with the BiLSTM-CRF network.", "Li et al. (2020a) used a recurrent neural network encoder-decoder framework with a pointer network to detect entity segments.", "Li et al. (2020b) treated NER as a machine reading comprehension (MRC) task, where entities were extracted as retrieved answer spans.",
"Yu et al. (2020b) ranked all the spans in terms of the pairs of start and end tokens in a sentence using a biaffine model.", "This section presents our proposed Modularized Interaction Network (MIN) for NER.", "The overall model architecture is shown in Figure 1(a); it consists of the NER Module, Boundary Module, Type Module and Interaction Mechanism.", "[Figure 1: (a) The overall architecture of MIN: word representations and BiLSTM hidden sequences feed the NER Module (with CRF), the Boundary Module (with a decoder producing segment information) and the Type Module (with CRF). (b) The boundary detection process.]", "In the NER Module, we adopt the RNN-BiLSTM-CRF model (Lample et al., 2016) as our backbone, which consists of three components: word representation, BiLSTM encoder and CRF decoder.", "Word Representation: Given an input sentence $S = \langle w_1, w_2, \dots, w_n \rangle$, each word $w_i$ $(1 \le i \le n)$ is represented by concatenating a word-level embedding $x_i^w$ and a character-level word embedding $x_i^c$ as follows: $x_i = [x_i^w; x_i^c]$ (1), where $x_i^w$ is the pre-trained word embedding, and the character-level word embedding $x_i^c$ is obtained with a BiLSTM to capture orthographic and morphological information.", "It considers each character in the word as a vector, and then inputs them into a BiLSTM to learn the hidden states.", "The final hidden states from the forward and backward outputs are concatenated as the character-level word information.", "BiLSTM Encoder: The distributed word embeddings $X = \langle x_1, x_2, \dots, x_n \rangle$ are then fed into the BiLSTM encoder to extract the hidden sequences $H = \langle h_1, h_2, \dots, h_n \rangle$ of all words as follows: $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, where $\overrightarrow{h}_i = \overrightarrow{\mathrm{LSTM}}(x_i, \overrightarrow{h}_{i-1})$ and $\overleftarrow{h}_i = \overleftarrow{\mathrm{LSTM}}(x_i, \overleftarrow{h}_{i+1})$ (2).", "In the NER Module, we fuse the distinct contextual boundary representation and type representation for the NER task.", "In addition, we also fuse the segment information from the Boundary Module to support the recognition of long entities.", "Note that the boundary information and type information can mutually reinforce each other.", "Thus, we use an interaction mechanism to reinforce them before fusing this information in the NER Module.", "Instead of directly concatenating this information with the hidden representations in the NER Module, we follow previous studies (Zhang et al., 2018; Yu et al., 2020a) and use a gate function to dynamically control the amount of information that flows in, infusing the useful part while excluding the irrelevant part.", "The gate function uses the information from the NER Module to guide the process, which is described formally as follows: $\tilde{H}^{Bdy}, \tilde{H}^{Type} = \mathrm{interact}(H^{Bdy}, H^{Type})$; $H^B = \sigma(W_1^\top H + W_B^\top \tilde{H}^{Bdy}) \odot \tilde{H}^{Bdy}$; $H^T = \sigma(W_2^\top H + W_T^\top \tilde{H}^{Type}) \odot \tilde{H}^{Type}$; $H^S = \sigma(W_3^\top H + W_S^\top H^{Seg}) \odot H^{Seg}$ (3), where $H^{Bdy}$ and $H^{Type}$ represent the distinct hidden-sequence representations from the Boundary Module and Type Module respectively, and $H^{Seg}$ represents the segment information from the Boundary Module.", "We will discuss them in Section 3.2 and Section 3.3.", "$\tilde{H}^{Bdy}$ and $\tilde{H}^{Type}$ represent the distinct hidden-sequence representations from the Boundary Module and Type Module respectively after the interaction using the interaction mechanism $\mathrm{interact}(\cdot, \cdot)$, and we will discuss them in Section 3.4.", "$H^B$, $H^T$ and $H^S$ represent the boundary, type and segment information respectively to be injected into the NER Module from the gate function.", "$\sigma$ denotes the logistic function and $\odot$ denotes element-wise multiplication.",
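As an illustrative sketch of the gate function in Eq. (3), one plausible PyTorch realization (an assumption about the implementation, not the authors' released code) looks as follows: a sigmoid gate computed from the NER hidden states decides how much of an auxiliary representation is let through.

```python
# Minimal gated fusion in the spirit of Eq. (3).
import torch
import torch.nn as nn

class FusionGate(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.w_h = nn.Linear(hidden_dim, hidden_dim, bias=False)  # acts as W_1^T H
        self.w_a = nn.Linear(hidden_dim, hidden_dim, bias=False)  # acts as W_B^T H~
    def forward(self, h_ner, h_aux):
        gate = torch.sigmoid(self.w_h(h_ner) + self.w_a(h_aux))
        return gate * h_aux  # element-wise product, the "dot" of Eq. (3)

batch, seq_len, dim = 2, 10, 100
h = torch.randn(batch, seq_len, dim)          # NER hidden sequences H
gate = FusionGate(dim)
h_b = gate(h, torch.randn(batch, seq_len, dim))  # gated boundary information
print(h_b.shape)  # torch.Size([2, 10, 100])
```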
"The final hidden representations in the NER Module are as follows: $H^{NER} = W^\top [H; H^B; H^T; H^S] + b$ (4).", "CRF Decoder: CRFs have been widely used in state-of-the-art NER models (Chiu and Nichols, 2016; Lample et al., 2016) to model tagging decisions when considering strong connections between output tags.", "For an input sentence $S = \langle w_1, w_2, \dots, w_n \rangle$, the score of a predicted sequence of labels $y = \langle y_1, y_2, \dots, y_n \rangle$ is defined as follows: $sc(S, y) = \sum_{i=0}^{n} T_{y_i, y_{i+1}} + \sum_{i=1}^{n} P_{i, y_i}$ (5), where $T_{y_i, y_{i+1}}$ represents the score of a transition from $y_i$ to $y_{i+1}$, and $P_{i, y_i}$ is the score of the tag $y_i$ of the $i$-th word in the sentence.", "The CRF model describes the probability of the predicted labels $y$ over all possible tag sequences in the set $Y$, that is: $p(y|S) = \frac{e^{sc(S, y)}}{\sum_{\tilde{y} \in Y} e^{sc(S, \tilde{y})}}$ (6).", "We maximize the log-probability of the correct sequence of labels during training.", "During decoding, we predict the label sequence with the maximum score: $y^* = \arg\max_{\tilde{y} \in Y} sc(S, \tilde{y})$ (7).", "3.2 Boundary Module: The Boundary Module needs to provide not only distinct contextual boundary information but also segment information for the NER Module.", "Here, we use another BiLSTM as the encoder to extract the distinct contextual boundary information.", "Inspired by BDRYBOT (Li et al., 2020a), a recurrent neural network encoder-decoder framework with a pointer network is used to detect entity segments for the segment information.", "The BDRYBOT model points the starting boundary word of an entity to the corresponding ending boundary word.", "The other words in the entity are skipped.", "The non-entity words are pointed to a specific position.", "This method has achieved promising results in the boundary detection task.", "However, due to the variable length of entities, this model is deprived of the power of batch training.", "In addition, as the segment information of each word in an entity is derived from the starting boundary word, the segment information of all the words within a segment will be incorrect if the starting boundary word is detected wrongly.", "To avoid this problem, we improve the training process and propose a novel method to capture the segment information of each word.", "We train the starting boundary word to point to the corresponding ending boundary word, and the other words in the sentence to point to a sentinel word inactive.", "The process is shown in Figure 1(b).", "Specifically, we use another BiLSTM as the encoder to obtain the distinct boundary hidden sequences $H^{Bdy} = \langle h_1^{Bdy}, h_2^{Bdy}, \dots, h_n^{Bdy} \rangle$, and a sentinel vector is padded to the last position of the hidden sequences $H^{Bdy}$ for the sentinel word inactive.", "Then, a unidirectional LSTM is used as a decoder to generate the decoded state $d_j$ at each time step $j$.", "To add extra information to the input of the LSTM, we follow Fernandez-Gonzalez and Gomez-Rodriguez (2020) and use the sum of the hidden states of the current ($h_j^{Bdy}$), previous ($h_{j-1}^{Bdy}$) and next ($h_{j+1}^{Bdy}$) words instead of the word embedding as the input to the decoder: $s_j = h_{j-1}^{Bdy} + h_j^{Bdy} + h_{j+1}^{Bdy}$, $d_j = \mathrm{LSTM}(s_j, d_{j-1})$ (8).", "Note that the first word and the last word do not have previous and next hidden states, respectively; we use zero vectors to represent them, shown as grey blocks in Figure 1(b).",
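The CRF score of Eq. (5) above can be sketched in a few lines; this is a minimal illustration only (start/end transition handling and Viterbi decoding are simplified away), not the paper's implementation.

```python
# Score of a tag sequence under a linear-chain CRF, as in Eq. (5):
# per-token emission scores plus transition scores between adjacent tags.
import torch

def crf_sequence_score(emissions, transitions, tags):
    """emissions: (n, num_tags); transitions: (num_tags, num_tags); tags: (n,)."""
    score = emissions[torch.arange(len(tags)), tags].sum()  # sum_i P_{i, y_i}
    score += transitions[tags[:-1], tags[1:]].sum()         # sum_i T_{y_i, y_{i+1}}
    return score

n, num_tags = 6, 5
emissions = torch.randn(n, num_tags)
transitions = torch.randn(num_tags, num_tags)
tags = torch.tensor([0, 2, 2, 1, 0, 3])
print(crf_sequence_score(emissions, transitions, tags))
```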
"After that, we use the biaffine attention mechanism (Dozat and Manning, 2017) to generate a feature representation $u_{j,i}$ for each possible boundary position $i$ at time step $j$, and the Softmax function is used to obtain the probability of word $w_i$ for determining an entity segment that starts with word $w_j$ and ends with word $w_i$: $u_{j,i} = d_j^\top W h_i^{Bdy} + U^\top d_j + V^\top h_i^{Bdy} + b$, $p(w_i \mid w_j) = \mathrm{Softmax}(u_j)_i$ (9), where $W$ is the weight matrix of the bilinear term, $U$ and $V$ are the weight matrices of the linear terms, $b$ is the bias vector, and $i \in [j, n+1]$ indicates a possible position in decoding.", "Different from the existing methods (Zhuo et al., 2016; Sohrab and Miwa, 2018) that enumerate all segments starting with word $w_j$ with equal importance, we use the probability $p(w_i \mid w_j)$ as the confidence of the segment that starts with word $w_j$ and ends with word $w_i$, and all these segments weighted by the probability $p(w_i \mid w_j)$ are then summed up as the segment information of word $w_j$: $H_j^{Seg} = \sum_{i=j}^{n} p(w_i \mid w_j)\, h_{j,i}^p$, where $h_{j,i}^p = [h_j^{Bdy}; h_i^{Bdy}; h_i^{Bdy} - h_j^{Bdy}; h_i^{Bdy} \odot h_j^{Bdy}]$ (10) is the representation of the segment that starts with word $w_j$ and ends with word $w_i$, and $\odot$ is the element-wise product.", "For the Type Module, we use the same network structure as in the NER Module.", "Given the shared input $X = \langle x_1, x_2, \dots, x_n \rangle$, a BiLSTM is used to extract the distinct contextual type information $H^{Type} = \langle h_1^{Type}, h_2^{Type}, \dots, h_n^{Type} \rangle$, and then a CRF is used to tag the type labels.", "As discussed in Section 1, the boundary information and type information can mutually reinforce each other.", "We first follow (Cui and Zhang, 2019; Qin et al., 2021) and use a self-attention mechanism over the labels of each sub-task to obtain explicit label representations.", "Then, we concatenate these representations with the contextual information of the corresponding sub-task to get the label-enhanced contextual information.", "For the $i$-th label-enhanced boundary contextual representation $h_i^{B\text{-}E}$, we first use the biaffine attention mechanism (Dozat and Manning, 2017) to obtain the attention scores between $h_i^{B\text{-}E}$ and the label-enhanced type contextual information $\langle h_1^{T\text{-}E}, h_2^{T\text{-}E}, \dots, h_n^{T\text{-}E} \rangle$.", "The attention scores $\langle \alpha_{i,1}^{B\text{-}E}, \alpha_{i,2}^{B\text{-}E}, \dots, \alpha_{i,n}^{B\text{-}E} \rangle$ are computed in the same way as in Equation (9).", "Then, we concatenate the $i$-th label-enhanced boundary representation $h_i^{B\text{-}E}$ and the interaction representation $r_i^{B\text{-}E}$, which takes the type information into account, as its updated boundary representation: $r_i^{B\text{-}E} = \sum_{j=1}^{n} \alpha_{i,j}^{B\text{-}E}\, h_j^{T\text{-}E}$, $\tilde{h}_i^{Bdy} = [h_i^{B\text{-}E}; r_i^{B\text{-}E}]$ (11).", "Similarly, we can obtain the updated type representation $\tilde{h}_i^{Type}$ by considering the boundary information.", "There are three modules in our proposed MIN model: the NER Module, the Boundary Module and the Type Module.", "They share the same word representations.", "Thus, the whole model can be trained with multitask training.", "During training, we minimize the negative log-probability of the correct sequence of labels in Equation (6) for the NER Module and Type Module, while the cross-entropy loss is used for the Boundary Module: $L^{NER} = -\log\big(p(y^{NER} \mid X)\big)$, $L^{Type} = -\log\big(p(y^{Type} \mid X)\big)$, $L^{Bdy} = -\frac{1}{n}\sum_{i=1}^{n} y_i^{Bdy} \log p_i^{Bdy}$ (12), where $X$ represents the input sequence, and $y^{NER}$ and $y^{Type}$ represent the correct sequences of labels for the NER Module and Type Module respectively.", "$p_i^{Bdy}$ is the predicted probability distribution over the boundary positions and $y_i^{Bdy}$ is the gold one-hot vector for the Boundary Module.", "Then, the final multitask loss is a weighted sum of the three losses: $L = \alpha L^{NER} + \beta L^{Type} + \gamma L^{Bdy}$ (13), where $\alpha$, $\beta$ and $\gamma$ are the corresponding weights.",
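The biaffine scoring used in Eq. (9) and in the Interaction Mechanism can be sketched as below; this is one reasonable parameterization in the style of Dozat and Manning (2017), and the exact form in the paper may differ.

```python
# Biaffine attention: bilinear term plus two linear terms plus a bias.
import torch
import torch.nn as nn

class Biaffine(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) * 0.01)  # bilinear weight W
        self.U = nn.Linear(dim, 1, bias=False)               # linear term on queries
        self.V = nn.Linear(dim, 1, bias=False)               # linear term on keys
        self.b = nn.Parameter(torch.zeros(1))
    def forward(self, q, k):
        # q: (n, dim) decoder states d_j; k: (m, dim) candidate positions h_i
        bilinear = q @ self.W @ k.t()                   # (n, m)
        scores = bilinear + self.U(q) + self.V(k).t() + self.b
        return scores.softmax(dim=-1)                   # p(w_i | w_j) per row

scorer = Biaffine(100)
p = scorer(torch.randn(7, 100), torch.randn(8, 100))
print(p.shape)  # torch.Size([7, 8])
```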
"Then, we present the experimental results on three benchmark datasets.", "Moreover, an ablation study is also conducted.", "Finally, we give some insights on further analysis.", "We evaluate the proposed model on three benchmark NER datasets: CoNLL2003 (Sang and De Meulder, 2003), WNUT2017 (Derczynski et al., 2017) and JNLPBA (Kim et al., 2004).", "CoNLL2003 It is collected from Reuters news articles.", "Four different types of named entities including PER , LOC , ORG and MISC are defined by the CoNLL 2003 NER shared task.", "WNUT2017 It is a set of noisy user-generated text including YouTube comments, StackExchange posts, Twitter text, and Red-dit comments.", "Six types of entities including PER , LOC , Group , Creative work , Corporation and Product are annotated.", "JNLPBA It is collected from MEDLINE abstracts.", "Five types of entities including DNA , RNA , protein , cell line and cell type are annotated.", "We compare the proposed MIN model with several baseline models including sequence labeling-based models and segment-based models.", "The compared sequence labeling-based models include: CNN-BiLSTM-CRF (Chiu and Nichols, 2016) This model utilizes CNN to capture character-level word features, and then the character-level and word-level embeddings are concatenated as the input to the BiLSTM-CRF network.", "It is a classical baseline for NER.", "RNN-BiLSTM-CRF (Lample et al., 2016) This model uses RNN instead of CNN in CNN-BiLSTM-CRF.", "ELMo (Peters et al., 2018) This model uses a deep bidirectional language model to learn contextualized word representation on a large text corpus, which is then fed into BiLSTM-CRF for NER.", "Flair (Akbik et al., 2018) This model uses BiLSTM-CRF with character-level contextualized representations for NER.", "BERT (Devlin et al., 2019) This model learns contextualized word representation based on a bidirectional Transformer, which is then fed into BiLSTM-CRF for NER.", "HCRA (Luo et al., 2020) This model uses sentence-level and document-level representations to augment the contextualized representation based on a funnel-shaped CNN with BiLSTM-CRF for NER.", "BiLSTM-Pointer 1 (Li et al., 2020a) This model uses BiLSTM as the encoder and another unidirectional LSTM with pointer networks as the decoder for entity boundary detection.", "Then, the entity segments generated by the decoder are classified with the Softmax classifier for NER.", "HSCRF (Ye and Ling, 2018) This model exploits the weighted sum of word-level within segment to learn segment-level features with Semi-CRFs which is then trained jointly on word-level with the BiLSTM-CRF network.", "1 In (Li et al., 2020a), the pointer networks is used for detecting entity boundaries only.", "We reproduce this work and add a Softmax layer for the NER task.", "MRC+BERT (Li et al., 2020b) This model formulates the NER task as a machine reading comprehension task.", "Biaffine+BERT (Yu et al., 2020b) This model ranks all the spans in terms of the pairs of start and end tokens in a sentence using a biaffine model.", "Our proposed MIN model is implemented with the PyTorch framework.", "We use 100-dimensional pre-trained Glove word embeddings 2 (Pennington et al., 2014).", "The char embeddings is initialized randomly as 25-dimensional vectors.", "When training the model, both of the embeddings are updated along with other parameters.", "We use Adam optimizer (Kingma and Ba, 2014) for training with a mini-batch.", "The initial learning rate is set to 0.01 and will shrunk by 5% after each epoch, dropout rate to 
"We report the results based on the best performance on the development set.", "All of our experiments are conducted on the same machine with 8 cores of an Intel(R) Xeon(R) E5-1630 CPU and two Nvidia GeForce-GTX GPUs.", "Following the work in (Ye and Ling, 2018), the maximum segment length for the segment information discussed in Section 3.2 is set to 6 for better computational efficiency.", "Table 2 shows the experimental results of our proposed MIN model and the baseline models.", "In Table 2, when compared with the models that do not use any language models or external knowledge, we observe that our MIN model outperforms all the compared baseline models in terms of precision, recall and F1 scores, achieving 0.57%, 4.77% and 3.26% improvements in F1 score on the CoNLL2003, WNUT2017 and JNLPBA datasets respectively.", "Among the compared models, the F1 scores of the BiLSTM-Pointer model are generally lower than those of the other models.", "This is because it does not utilize the word-level dependencies within a segment and also suffers from error propagation between boundary detection and type prediction.", "The CNN-BiLSTM-CRF and RNN-BiLSTM-CRF models achieve similar performance on the three datasets, performing worse than HCRA and HSCRF.", "The HCRA model uses sentence-level and document-level representations to augment the contextualized word representation, while the HSCRF model considers segment-level and word-level information with multitask training.", "However, the HCRA model does not consider segment-level information, and the HSCRF model does not directly model the word-level dependencies within a segment.", "In addition, none of the above models share information between the boundary detection and type prediction sub-tasks.", "Our MIN model achieves the best performance as it is capable of considering all of this information.", "When pre-trained language models such as ELMo and BERT are incorporated, all the models achieve better results.", "In particular, we observe that our MIN model achieves 0.95%, 3.83% and 2.73% improvements in F1 score on the CoNLL2003, WNUT2017 and JNLPBA datasets respectively when compared with the other models.", "The results are consistent with what has been discussed for the models without any pre-trained language models.", "To show the importance of each component in our proposed MIN model, we conduct an ablation experiment on the Boundary Module, Type Module and Interaction Mechanism.", "As shown in Table 3, we can see that all these components contribute significantly to the effectiveness of our MIN model.", "The discussion on the effectiveness of each component is given with respect to the three datasets.", "The Boundary Module improves the F1 scores by 1.13%, 3.58% and 2.1% on CoNLL2003, WNUT2017 and JNLPBA respectively.", "This is because it not only provides segment-level information for the NER Module but also provides boundary information for the Type Module.", "As such, it helps recognize long entities and predict the entity types more accurately.", "The Type Module improves the F1 scores by 1.02%, 2.81% and 1.42% on CoNLL2003, WNUT2017 and JNLPBA respectively.", "This is because it provides the type information for the Boundary Module, which can help detect entity boundaries more accurately.", "In addition, it can also help obtain more effective segment information.",
"The Interaction Mechanism achieves 0.54%, 1.86% and 0.72% improvements in F1 score on CoNLL2003, WNUT2017 and JNLPBA respectively.", "As it bridges the gap between the Boundary Module and the Type Module for information interaction and sharing, it can help improve the performance of boundary detection and type prediction simultaneously.", "Overall, the different components of the proposed model work effectively with each other under multitask training and enable the model to achieve the state-of-the-art performance for the NER task.", "As our proposed MIN model is capable of recognizing long entities, we compare the performance of our MIN model with RNN-BiLSTM-CRF and HSCRF.", "Note that the RNN-BiLSTM-CRF model is the base model used in our MIN model.", "The HSCRF model also considers segment-level and word-level information with multitask training.", "The results are shown in Figure 2.", "The experiment is conducted on the CoNLL2003 test dataset.", "We follow the setting in (Ye and Ling, 2018) and group the entities according to their lengths, from 1 to 6.", "[Figure 2: Performance (F1 scores, %) against entity length (1 to 6) for MIN, HSCRF and RNN-BiLSTM-CRF.]", "We observe that our MIN model and the HSCRF model consistently outperform RNN-BiLSTM-CRF in each group.", "In particular, the improvement is obvious when the entity length is longer than 4, because both our MIN model and the HSCRF model consider segment-level information.", "However, our MIN model performs better than the HSCRF model in each group.", "More specifically, when the entity length is longer than 4, our MIN model shows great improvement over HSCRF.", "This is because the HSCRF model directly uses segment-level features with Semi-CRFs to tag the segments, which ignores the word-level dependencies within the segment.", "In contrast, our MIN model combines segment-level information with the word-level dependencies within a segment for the NER task.", "In this paper, we have proposed a novel Modularized Interaction Network (MIN) model for the NER task.", "The proposed MIN model utilizes both segment-level information and word-level dependencies, and incorporates an interaction mechanism to support information sharing between boundary detection and type prediction to enhance the performance of the NER task.", "We have conducted extensive experiments on three NER benchmark datasets.", "The experimental results have shown that our proposed MIN model has achieved the state-of-the-art performance.", "This research has been supported by the National Key R&D Program of China under Grant No. 2020AAA0106600, the National Natural Science Foundation of China under Grants No. 62062012 and 61976021, and the Ministry of Education (MoE) of Singapore under the Academic Research Fund (AcRF) Tier 1 Grant RG135/18." ]
[ "abstain", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "objective", "abstain", "objective", "objective", "objective", "objective", "abstain", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "method", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "objective", "abstain", "method", "objective", "other" ]
[ "In this paper, we study the named entity recognition (NER) problem under distant supervision.", "Due to the incompleteness of the external dictionaries and/or knowledge bases, such distantly annotated training data usually suffer from a high false negative rate.", "To this end, we formulate the Distantly Supervised NER (DS-NER) problem via Multi-class Positive and Unlabeled (MPU) learning and propose a theoretically and practically novel CONFidence-based MPU ( Conf-MPU ) approach.", "To handle the incomplete annotations, Conf-MPU consists of two steps.", "First, a confidence score is estimated for each token of being an entity token.", "Then, the proposed Conf-MPU risk estimation is applied to train a multi-class classifier for the NER task.", "Thorough experiments on two benchmark datasets labeled by various external knowledge demonstrate the superiority of the proposed Conf-MPU over existing DS-NER methods.", "Our code is available at Github 1 .", "Named Entity Recognition (NER) aims to detect entity mentions from text and classify them into predefined types.", "It is a fundamental task in information extraction and many other downstream tasks (Gbor et al., 2018; Luan et al., 2017; Giorgi et al., 2019).", "However, the necessity of extensive human efforts to annotate a large amount of training data imposes restrictions on the state-of-the-art supervised deep learning methods, especially in professional fields.", "To address this problem, distantly supervised methods spring up (Yang et al., 2018; Shang et al., 2018; Mayhew et al., 2019; Cao et al., 2019; Peng et al., 2019; Liang et al., 2020; Liu et al., 2021; Zhang et al., 2021), which aim to train NER models using automatically annotated training data based on external knowledge such as dictionaries and 1 https://github.com/kangISU/Conf-MPU-DS-NER Figure 1: Distant labeling example for the entity type of Disease using a dictionary.", "knowledge bases.", "Observed by previous DS-NER methods (Shang et al., 2018; Peng et al., 2019), distant labels provided by reliable dictionaries are usually of high precision.", "However, such distant labeling suffers a major drawback incomplete labeling.", "This is due to the fact that most existing dictionaries and knowledge bases have limited coverage on entities.", "Hence simply treating unlabeled samples ( i.e. , unmatched tokens) as negative ones will introduce a high false negative rate ( e.g. 
"Recently, binary Positive and Unlabeled (PU) learning has been applied to DS-NER tasks to address this challenge (Peng et al., 2019).", "PU learning performs classification using only limited labeled positive data and unlabeled data, and is thus naturally suitable for handling distant supervision, where the external knowledge often has limited coverage of the positive samples.", "However, binary PU learning has several drawbacks in real DS-NER tasks.", "It applies the one-vs-all strategy to convert a multi-class classification problem into multiple binary classification problems, and thus suffers from two weaknesses.", "First, it is not efficient, especially in the case where there are many entity types.", "For a NER task with n entity types, n binary classifiers need to be trained.", "Second, the scale of the predicted confidence values may differ among those binary classifiers, which may not guarantee a mutually beneficial inference for the final prediction (Bishop, 2006).", "Furthermore, the PU learning theory is built on a fundamental assumption about the data distribution, namely that the unlabeled data can accurately reveal the overall distribution (i.e., the marginal distribution of the target field) (Bekker and Davis, 2020).", "In DS-NER tasks, the distantly annotated training data may not fit this assumption well: it depends on the coverage of the used dictionaries or knowledge bases on the entities.", "Our empirical studies validate that violation of this assumption can significantly impact the performance of PU learning.", "To address these challenges in DS-NER tasks and PU learning, we propose a CONFidence-based Multi-class Positive and Unlabeled (Conf-MPU) learning framework.", "The proposed Conf-MPU can handle different levels of false negative rates brought by dictionaries of various coverage and does not overfit to the distantly labeled training data.", "It consists of two steps.", "Specifically, given the distantly labeled training data, we first carry out a token-level binary classification to estimate the confidence score (a probability value in $[0, 1]$) of a token being an entity token (i.e., a token of a named entity).",
"Then, we perform the NER classification using a neural network model with the proposed Conf-MPU risk estimator, which incorporates the confidence scores obtained in the first step into the risk estimation, to alleviate the impact of annotation imperfection.", "It is worth noting that the two-step strategy of Conf-MPU needs to train only two classifiers for any DS-NER task with an arbitrary number of entity types, which is more efficient than previous binary PU learning.", "In summary, our main contributions are: We propose Conf-MPU, a theoretically and practically novel approach for the DS-NER task.", "Conf-MPU enriches the PU learning theory with solid theoretical analysis.", "We verify that the practical use of traditional PU learning is subject to its theoretical assumption, which can be relaxed by Conf-MPU.", "As far as we know, this is the first work specifically dealing with such a practical problem.", "We empirically demonstrate that Conf-MPU with the two-step strategy can significantly alleviate the impact of incomplete annotations during model training and outperform the state-of-the-art DS-NER methods on benchmark datasets.", "In this section, we briefly review the risk formulations of standard supervised learning and PU learning in the binary classification setting.", "Suppose that the data follow an unknown probability distribution with density $p(x, y)$.", "Let $x \in \mathcal{X} \subseteq \mathbb{R}^d$ and $y \in \mathcal{Y} = \{0, 1\}$, where 0 and 1 indicate the negative and positive classes, respectively.", "The goal is to learn a decision function $f: \mathcal{X} \to \mathcal{Y}$ by minimizing the expected classification risk: $R(f) = \pi R_P^+(f) + (1 - \pi) R_N^-(f)$ (1).", "In this function, $\pi = p(y = 1)$ is the prior of the positive class.", "$R_P^+(f) = \mathbb{E}_{x \sim p(x|y=1)}[\ell(f(x), 1)]$ and $R_N^-(f) = \mathbb{E}_{x \sim p(x|y=0)}[\ell(f(x), 0)]$ denote the expected classification risks on the positive and negative classes, respectively, where $\mathbb{E}$ denotes expectation, its subscript indicates the data distribution over which the expectation is computed, and the loss function is denoted by $\ell$.", "In the supervised learning setting, we are given both labeled positive and negative data, sampled independently from $p_P(x) = p(x|y=1)$ and $p_N(x) = p(x|y=0)$ as $X_P = \{x_j^P\}_{j=1}^{n_P}$ and $X_N = \{x_j^N\}_{j=1}^{n_N}$, respectively.", "Then Eq. 1 can be estimated by $\hat{R}_{PN}(f) = \pi \hat{R}_P^+(f) + (1 - \pi) \hat{R}_N^-(f)$, where $\hat{R}_P^+(f) = \frac{1}{n_P}\sum_{j=1}^{n_P} \ell(f(x_j^P), 1)$ and $\hat{R}_N^-(f) = \frac{1}{n_N}\sum_{j=1}^{n_N} \ell(f(x_j^N), 0)$.", "In the PU learning setting, we only have access to labeled positive data $X_P$ and unlabeled data $X_U = \{x_j^U\}_{j=1}^{n_U}$ drawn from $p_U(x)$, instead of labeled negative data $X_N$, which indicates that the classification risk in Eq. 1 cannot be directly estimated as in the supervised learning setting.",
"For this problem, Du Plessis et al. (2014) propose the expected classification risk formulation of PU learning: $R(f) = \pi R_P^+(f) + R_U^-(f) - \pi R_P^-(f)$ (2), where $R_U^-(f) = \mathbb{E}_{x \sim p(x)}[\ell(f(x), 0)]$ and $R_P^-(f) = \mathbb{E}_{x \sim p(x|y=1)}[\ell(f(x), 0)]$.", "Here $R_U^-(f) - \pi R_P^-(f)$ can alternatively represent $(1 - \pi) R_N^-(f)$ because $p(y=0)\,p(x|y=0) = p(x) - p(y=1)\,p(x|y=1)$.", "PU learning assumes that the unlabeled data $X_U$ can reflect the true overall distribution, that is, $p_U(x) = p(x)$, since the unlabeled data consist of both positive and negative data; under this assumption, Eq. 2 can be approximated by $\hat{R}_{PU}(f) = \pi \hat{R}_P^+(f) + \hat{R}_U^-(f) - \pi \hat{R}_P^-(f)$, where $\hat{R}_U^-(f) = \frac{1}{n_U}\sum_{j=1}^{n_U} \ell(f(x_j^U), 0)$ and $\hat{R}_P^-(f) = \frac{1}{n_P}\sum_{j=1}^{n_P} \ell(f(x_j^P), 0)$.", "Let $y \in \mathcal{Y} = \{0, 1, 2, \dots, k\}$, where 0 refers to the negative class and $1, \dots, k$ refer to the $k$ positive classes.", "The goal in multi-class classification is to minimize the following expected classification risk: $R(f) = \sum_{i=1}^{k} \pi_i R_{P_i}^+(f) + \big(1 - \sum_{i=1}^{k} \pi_i\big) R_N^-(f)$ (3), where $R_{P_i}^+(f) = \mathbb{E}_{x \sim p(x|y=i)}[\ell(f(x), i)]$ and $\pi_i = p(y = i)$ are the classification risk and the prior of the $i$-th positive class, respectively.", "We denote this classification risk as MPN.", "Following the PU learning setting, there are only labeled positive data $X_{P_i} = \{x_j^{P_i}\}_{j=1}^{n_{P_i}}$ drawn from $p_{P_i}(x) = p(x|y=i)$ for $i \in \{1, \dots, k\}$, and unlabeled data $X_U$.", "Thus we cannot directly estimate Eq. 3.", "Here we adopt the same probability principle as applied in binary PU learning to alternatively compute the risk on the negative data.", "Since $p(y=0)\,p(x|y=0) = p(x) - \sum_{i=1}^{k} p(y=i)\,p(x|y=i)$, we can further derive Eq. 3 as: $R(f) = \sum_{i=1}^{k} \pi_i R_{P_i}^+(f) + R_U^-(f) - \sum_{i=1}^{k} \pi_i R_{P_i}^-(f)$ (4), where $R_U^-(f) - \sum_{i=1}^{k} \pi_i R_{P_i}^-(f)$ theoretically plays the role of $\big(1 - \sum_{i=1}^{k} \pi_i\big) R_N^-(f)$, and $R_{P_i}^-(f) = \mathbb{E}_{x \sim p_{P_i}(x)}[\ell(f(x), 0)]$.", "Specifically, the MPU risk estimator is given as: $\hat{R}_{MPU}(f) = \sum_{i=1}^{k} \frac{\pi_i}{n_{P_i}} \sum_{j=1}^{n_{P_i}} \ell(f(x_j^{P_i}), i) + \max\Big\{0,\; \frac{1}{n_U} \sum_{j=1}^{n_U} \ell(f(x_j^U), 0) - \sum_{i=1}^{k} \frac{\pi_i}{n_{P_i}} \sum_{j=1}^{n_{P_i}} \ell(f(x_j^{P_i}), 0)\Big\}$ (5), with a non-negative constraint inspired by Kiryo et al. (2017) ensuring that the risk on the negative class is non-negative.",
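A PyTorch sketch of the MPU risk estimator of Eq. (5), using the MAE loss introduced later in Section 4.4, is given below. It is a plausible implementation of the formula, not the authors' released code; the tensor shapes and names are assumptions.

```python
# Empirical MPU risk (Eq. 5) with the non-negative constraint and MAE loss.
import torch

def mae_loss(probs, label):
    """Bounded MAE between softmax outputs and a one-hot label; in [0, 2/(k+1)]."""
    one_hot = torch.zeros_like(probs)
    one_hot[:, label] = 1.0
    return (one_hot - probs).abs().mean(dim=1)

def mpu_risk(probs_pos, probs_unl, priors):
    """probs_pos[i]: softmax outputs on samples labeled with positive class i+1;
    probs_unl: outputs on unlabeled samples; priors[i]: estimated pi_i."""
    risk_pos = sum(pi * mae_loss(p, i + 1).mean()
                   for i, (pi, p) in enumerate(zip(priors, probs_pos)))
    risk_unl_neg = mae_loss(probs_unl, 0).mean()
    risk_pos_neg = sum(pi * mae_loss(p, 0).mean()
                       for pi, p in zip(priors, probs_pos))
    return risk_pos + torch.clamp(risk_unl_neg - risk_pos_neg, min=0.0)

k = 2
priors = [0.05, 0.03]
probs_pos = [torch.rand(8, k + 1).softmax(-1) for _ in range(k)]
probs_unl = torch.rand(64, k + 1).softmax(-1)
print(mpu_risk(probs_pos, probs_unl, priors))
```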
"We denote this classification risk as MPU, whose estimation requires the same assumption on the data distribution as binary PU learning, namely, $p_U(x) = p(x)$.", "We refer to this assumption as the PU assumption hereinafter for convenience.", "Under the PU assumption, $X_U$ can be used to estimate $R_U^-(f)$.", "However, the PU assumption can be violated in real distant supervision scenarios.", "The distribution of the unlabeled data $p_U(x)$ may be different from the overall distribution $p(x)$, especially when the distant supervision has a good coverage.", "In such cases, the unlabeled data will have a distribution closer to the distribution of the true negative data $p_N(x)$ than to the overall distribution $p(x)$.", "Thus, the risk estimation of $R_U^-(f)$ based on this assumption, in either MPU or binary PU, may be biased.", "To alleviate such estimation bias, we derive a novel Conf-MPU risk function from MPU.", "In the context of NER tasks, we observe that almost any combination of characters could be part of a named entity.", "Based on this observation, we mathematically define $\lambda(x) = p(y > 0 \mid x)$ to determine the confidence score of a token being an entity token, no matter which entity type it belongs to, and further assume that $\lambda(x) > 0$.", "Under this assumption, we can further decompose $R_U^-(f)$ in Eq. 4 by involving a threshold parameter $0 < \varepsilon \le 1$ as follows: $R_U^-(f) = \sum_{i=1}^{k} \pi_i R_{P_i}'(f) + R_U'(f)$ (6), where $R_{P_i}'(f) = \mathbb{E}_{x \sim p_{P_i}(x \mid \lambda(x) > \varepsilon)}\big[\ell(f(x), 0)\,\frac{1}{\lambda(x)}\big]$ and $R_U'(f) = \mathbb{E}_{x \sim p(x \mid \lambda(x) \le \varepsilon)}[\ell(f(x), 0)]$.", "The detailed proof is shown as follows.", "Proof.", "Since $\lambda(x) > 0$ and $0 < \varepsilon \le 1$, we have $R_U^-(f) = \mathbb{E}_{x \sim p(x)}[\ell(f(x), 0)] = \int_{\lambda(x) > \varepsilon} \ell(f(x), 0)\, p(x)\, dx + \int_{\lambda(x) \le \varepsilon} \ell(f(x), 0)\, p(x)\, dx = \int_{\lambda(x) > \varepsilon} \ell(f(x), 0)\, p(x)\, \frac{p(x, y > 0)}{p(x, y > 0)}\, dx + R_U'(f) = \int_{\lambda(x) > \varepsilon} \ell(f(x), 0)\, \frac{p(x)\, p(x \mid y > 0)\, p(y > 0)}{p(y > 0 \mid x)\, p(x)}\, dx + R_U'(f) = \int_{\lambda(x) > \varepsilon} \ell(f(x), 0)\, \frac{p(x \mid y > 0)\, p(y > 0)}{\lambda(x)}\, dx + R_U'(f) = \sum_{i=1}^{k} p(y = i) \int_{\lambda(x) > \varepsilon} \ell(f(x), 0)\, \frac{p(x \mid y = i)}{\lambda(x)}\, dx + R_U'(f) = \sum_{i=1}^{k} \pi_i \int_{\lambda(x) > \varepsilon} \ell(f(x), 0)\, \frac{1}{\lambda(x)}\, p(x \mid y = i)\, dx + R_U'(f) = \sum_{i=1}^{k} \pi_i R_{P_i}'(f) + R_U'(f)$.", "Given a reliable $\hat{\lambda}$ and a proper $\varepsilon$, $\hat{\lambda}(x) > \varepsilon$ indicates $x$ being an entity token (a positive sample), and otherwise a non-entity token (a negative sample),", "which further induces that $p_{P_i}(x \mid \lambda(x) > \varepsilon) \approx p_{P_i}(x)$, and $p(x \mid \lambda(x) \le \varepsilon) \approx p_U(x \mid \lambda(x) \le \varepsilon) \approx p_N(x)$, even if $p_U(x)$ is different from $p(x)$.", "Thus, empirically, $R_{P_i}'(f)$ and $R_U'(f)$ can be estimated with less bias using $X_{P_i}$ and $X_U$, respectively, which further leads to a more precise estimation of $R_U^-(f)$.", "This is the mechanism by which Conf-MPU can significantly reduce the estimation bias in practice, even if the PU assumption is violated.", "Specifically, the Conf-MPU risk estimator can be expressed as: $\hat{R}_{\text{Conf-MPU}}(f) = \sum_{i=1}^{k} \frac{\pi_i}{n_{P_i}} \sum_{j=1}^{n_{P_i}} \max\Big\{0,\; \ell(f(x_j^{P_i}), i) + \mathbb{1}_{\hat{\lambda}(x_j^{P_i}) > \varepsilon}\, \ell(f(x_j^{P_i}), 0)\, \frac{1}{\hat{\lambda}(x_j^{P_i})} - \ell(f(x_j^{P_i}), 0)\Big\} + \frac{1}{n_U} \sum_{j=1}^{n_U} \big[\mathbb{1}_{\hat{\lambda}(x_j^U) \le \varepsilon}\, \ell(f(x_j^U), 0)\big]$ (8), with a constraint guaranteeing a non-negative loss on each labeled positive sample, where $\hat{\lambda}$ is an empirical confidence score estimator.", "In DS-NER tasks, we formulate the sub-task of estimating $\lambda(x)$ as a token-level binary classification problem, which also uses the distant labels.",
"In practice, a classifier with a sigmoid output layer for this sub-task can guarantee $\hat{\lambda}(x) > 0$.", "Targeting the challenge of high false negative rates in the training data, we give the following analysis to offer some insights into the Conf-MPU risk estimator.", "For ease of expression, we use letters to denote the terms in Eq. (8): $A = \ell(f(x_j^{P_i}), i)$, $B = \mathbb{1}_{\hat{\lambda}(x_j^{P_i}) > \varepsilon}\, \ell(f(x_j^{P_i}), 0)\, \frac{1}{\hat{\lambda}(x_j^{P_i})}$, $C = \ell(f(x_j^{P_i}), 0)$, and $D = \mathbb{1}_{\hat{\lambda}(x_j^U) \le \varepsilon}\, \ell(f(x_j^U), 0)$.", "The threshold $\varepsilon$ is set to 0.5 by default.", "We assume that $\hat{\lambda}(x)$ of an entity token is close to 1 (i.e., $\hat{\lambda}(x) > \varepsilon$), and $\hat{\lambda}(x)$ of a non-entity token is close to 0 (i.e., $\hat{\lambda}(x) \le \varepsilon$).", "For a true positive sample (e.g., sepsis, distantly labeled in Figure 1), the loss is computed by $A + B - C$, where $B$ is involved because its confidence score is larger than the threshold.", "Since $1/\hat{\lambda}(x)$ is close to 1, $B - C$ is almost 0 but positive, and thus the loss on this sample approximately equals $A$, which is very similar to the loss on a positive sample in standard supervised learning.", "For a true negative sample (e.g., patient, unlabeled in Figure 1), the loss is calculated by $D$, since its confidence score is less than the threshold.", "So the minimization of $D$ enables the model to learn from this true negative sample.", "For a false negative sample (e.g., neutropenia, unlabeled in Figure 1), the loss is not counted, because its confidence score is larger than the threshold and thus $D$ is not calculated.", "This is the mechanism by which Conf-MPU handles false negative samples in the unlabeled data.", "Here we establish an estimation error bound for the proposed Conf-MPU risk estimator (Eq. 8) to show its guaranteed performance.", "Theorem 1.", "Let $f^* = \arg\min_{f \in \mathcal{F}} R(f)$ and $\hat{f}_{\text{Conf-MPU}} = \arg\min_{f \in \mathcal{F}} \hat{R}_{\text{Conf-MPU}}(f)$.", "Assume that $\ell(\cdot) \in [0, C_\ell]$ and that $\ell$ is Lipschitz continuous on the interval $[-C_g, C_g]$ with a Lipschitz constant $L_\ell$, where $C_\ell, C_g > 0$.", "Also suppose that $\hat{\lambda}$ is a fixed function independent of the data used to compute $\hat{R}_{\text{Conf-MPU}}(f)$ and that $\varepsilon \in (0, 1]$.", "Let $\theta = p(\lambda(x) \le \varepsilon)$ and $\Delta = \mathbb{E}_{x \sim p(x)}\big[|\hat{\lambda}(x) - \lambda(x)|^2\big]$.", "Then for any $\delta > 0$, with probability at least $1 - \delta$, $R(\hat{f}_{\text{Conf-MPU}}) - R(f^*) \le \sum_{i=1}^{k} 2\pi_i \big(\tfrac{1}{\varepsilon} + 1\big) C_\ell\, \Delta + \sum_{i=1}^{k} 2\pi_i \Big(2 L_\ell\, \mathfrak{R}_{n_{P_i}, p_{P_i}(x)}(\mathcal{F}) + \big(\tfrac{1}{\varepsilon} + 1\big) C_\ell \sqrt{\tfrac{\log \frac{k+1}{\delta}}{2 n_{P_i}}}\Big) + 4 L_\ell\, \mathfrak{R}_{n_U, p(x)}(\mathcal{F}) + 2 C_\ell \sqrt{\tfrac{\log \frac{k+1}{\delta}}{2 n_U}} + 2 C_\ell \sqrt{\Delta}\, (1 - \theta)$.", "In Theorem 1, $\mathcal{F}$ is the function class, and $\mathfrak{R}_{n_{P_i}, p_{P_i}(x)}(\mathcal{F})$ is the Rademacher complexity of the function class $\mathcal{F}$ for samplings of size $n_{P_i}$ from the distribution $p_{P_i}(x)$; $\mathfrak{R}_{n_U, p(x)}(\mathcal{F})$ follows a similar definition.", "We relegate the proof to the Appendix.", "In this section, we describe the setup for the DS-NER classification.", "In DS-NER tasks, professional dictionaries (e.g., UMLS) and knowledge bases (e.g., Wikidata) are used to automatically generate distant labels.",
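A PyTorch sketch of the Conf-MPU risk of Eq. (8) is given below. It reflects one plausible reading of the formula, not the released implementation; `eps` stands for the threshold (0.5 by default) and the `conf_*` tensors hold the first-step confidence scores, all named here as assumptions.

```python
# Empirical Conf-MPU risk (Eq. 8) with per-sample non-negativity and MAE loss.
import torch

def mae_loss(probs, label):
    """Bounded MAE between softmax outputs and a one-hot label."""
    one_hot = torch.zeros_like(probs)
    one_hot[:, label] = 1.0
    return (one_hot - probs).abs().mean(dim=1)

def conf_mpu_risk(probs_pos, conf_pos, probs_unl, conf_unl, priors, eps=0.5):
    risk = 0.0
    for i, pi in enumerate(priors):
        p, lam = probs_pos[i], conf_pos[i]
        a = mae_loss(p, i + 1)                                   # term A
        b = (lam > eps).float() * mae_loss(p, 0) / lam.clamp(min=1e-8)  # term B
        c = mae_loss(p, 0)                                       # term C
        risk = risk + pi * torch.clamp(a + b - c, min=0.0).mean()
    d = ((conf_unl <= eps).float() * mae_loss(probs_unl, 0)).mean()  # term D
    return risk + d

k = 2
priors = [0.05, 0.03]
probs_pos = [torch.rand(8, k + 1).softmax(-1) for _ in range(k)]
conf_pos = [torch.rand(8) for _ in range(k)]
probs_unl = torch.rand(64, k + 1).softmax(-1)
conf_unl = torch.rand(64)
print(conf_mpu_risk(probs_pos, conf_pos, probs_unl, conf_unl, priors))
```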
"Distant labeling with dictionaries employs string matching algorithms to map training samples to dictionary entries (Ren et al., 2015; Giannakopoulos et al., 2017; Peng et al., 2019), while knowledge bases utilize public APIs to perform such distant labeling.", "The proposed Conf-MPU risk estimator can be applied to any NER classifier where the task is to predict the label of each token.", "For example, BERT (Devlin et al., 2019) can be used as the underlying NER model, and the Conf-MPU risk estimation can then be used to calculate the classification risks.", "We use Conf-MPU BERT to denote this method.", "BiLSTM (Chiu and Nichols, 2016) is another popular choice for NER models.", "Ratinov and Roth (2009), Passos et al. (2014) and Chiu and Nichols (2016) demonstrate that using lexicons as external features can improve NER performance.", "With the dictionaries, we extract the lexicon features as follows.", "For each token, we match its contextual words within a window against the entries in the dictionaries.", "If there is any successful match, a binary indicator is set to 1, and otherwise to 0.", "With a window size of $n$, we can form an $n$-bit vector, which is appended to the input embedding.", "We denote the BiLSTM model with this lexicon feature engineering as LBiLSTM, and denote by Conf-MPU LBiLSTM the LBiLSTM-based classifier with the Conf-MPU risk estimation.", "For the first step of estimating the confidence scores, we build a token-level binary classifier (i.e., $\hat{\lambda}$) based on LBiLSTM to output the scores.", "To be consistent with the PU learning setting, this classifier is equipped with a binary PU learning risk estimator (i.e., $\hat{R}_{PU}(\cdot)$).", "Unlike in supervised learning, where the priors (i.e., $\pi_i$) can easily be obtained from human annotations, we cannot directly acquire them from distant annotations.", "In PU learning research, there are methods proposed specifically for estimating the priors (Bekker and Davis, 2018; Jain et al., 2016; Du Plessis and Sugiyama, 2014).", "Here we adopt the most effective one, the TIcE algorithm from Bekker and Davis (2018), to perform the prior estimation.", "Peng et al. (2019) point out that a bounded loss function can help avoid overfitting in the PU learning setting.", "We also confirm this argument in our empirical studies.", "Thus, instead of using the common unbounded cross-entropy loss function, we adopt the mean absolute error (MAE) as the loss function for Conf-MPU and the other PU learning methods in our experiments.", "Given its label $y$ in one-hot form, the loss on a token $x$ is defined by: $\ell(f(x), y) = \frac{1}{k+1} \sum_{i=0}^{k} |y^{(i)} - f(x)^{(i)}|$, where $f(x)$ is the softmax output, and both $y$ and $f(x)$ are $(k+1)$-dimensional.", "Note that $\ell(f(x), y) \in [0, \frac{2}{k+1}]$ is bounded.", "In DS-NER tasks, self-training strategies as post-processing can often further improve the performance, such as iteratively enriching dictionaries based on the model predictions (Peng et al., 2019), or iteratively training a teacher-student framework (Liang et al., 2020).", "The discussion of self-training frameworks is out of the scope of this paper, and we refer the readers to Zoph et al. (2020) for more information.",
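A sketch of the lexicon feature extraction described above is given below; the exact matching scheme (n-grams ending at the current token) is an assumption for illustration.

```python
# For each token, an n-bit indicator vector of dictionary hits for
# contextual n-grams inside a window; appended to the input embedding.
def lexicon_features(tokens, dictionary, window=3):
    feats = []
    for i in range(len(tokens)):
        bits = []
        for n in range(1, window + 1):            # n-grams ending at token i
            start = i - n + 1
            span = " ".join(tokens[max(0, start):i + 1]).lower()
            bits.append(1 if start >= 0 and span in dictionary else 0)
        feats.append(bits)
    return feats

dic = {"new york", "new york university"}
tokens = "from New York University".split()
print(lexicon_features(tokens, dic))
# [[0, 0, 0], [0, 0, 0], [0, 1, 0], [0, 0, 1]]
```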
"We consider two benchmark NER datasets from different domains: (1) BC5CDR comes from the biomedical domain.", "It consists of 1,500 articles, containing 15,935 Chemical and 12,852 Disease mentions; (2) CoNLL2003 is a well-known open-domain NER dataset.", "It consists of 1,393 English news articles, containing 10,059 PER, 10,645 LOC, 9,323 ORG and 5,062 MISC mentions.", "We obtain the following distantly labeled datasets: (1) BC5CDR (Big Dict) is labeled using a dictionary released by Shang et al. (2018) (https://github.com/shangjingbo1226/AutoNER); (2) BC5CDR (Small Dict) is labeled using a smaller dictionary constructed by selecting only the first 20% of the entries of the previous one; (3) CoNLL2003 (KB) is labeled by the knowledge base Wikidata and released by Liang et al. (2020) (https://github.com/cliang1453/BOND); (4) CoNLL2003 (Dict) is labeled using a refined dictionary released by Peng et al. (2019) (https://github.com/v-mipeng/LexiconNER).", "For dictionary labeling, we use the strict string matching algorithm presented in Peng et al. (2019).", "The process of knowledge base labeling can be found in Liang et al. (2020).", "All DS-NER methods are trained on the same distantly labeled training data and evaluated on the released human-annotated test sets in terms of span-level precision, recall and F1 score.", "To avoid the noise induced by the position tag in the distant labels, we do not consider the position of each token in a named entity.", "During the prediction phase, a continuous span with the same label is considered a single entity.", "We compare the proposed Conf-MPU with different groups of baseline methods.", "Fully Supervised Methods: We present the state-of-the-art (SOTA) performance of fully supervised methods on the two benchmark datasets, Wang et al. (2021) on BC5CDR and Wang et al. (2020) on CoNLL2003.", "For the SOTA methods, we report the results from their original papers.", "We also evaluate the employed BiLSTM and BERT models in the fully supervised setting.", "The performance in this group serves as upper-bound references.", "Distantly Supervised Methods: We consider the following distantly supervised NER methods: (1) Dict/KB Matching distantly labels the test sets using the dictionaries or knowledge bases directly, and is included here as a reference; (2) AutoNER (Shang et al., 2018) trains the model using a tie-or-break mechanism to detect entity boundaries and then predicts the entity type for each candidate; (3) BERT-ES (Liang et al., 2020) adopts early stopping to prevent BERT from overfitting to the noisy distant labels; (4) BNPU (Peng et al., 2019) built on LBiLSTM (BNPU LBiLSTM) applies a binary PU learning risk estimation with MAE as the loss function to each entity type and then infers the final types; (5) MPU is the predecessor of the proposed Conf-MPU, which computes the empirical risk using Eq. 5.", "We also build MPU on both the BERT and LBiLSTM models, denoted as MPU BERT and MPU LBiLSTM.", "Note that the full models in Peng et al. (2019) and Liang et al. (2020) contain self-training as post-processing steps, which are omitted here.",
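The span-level evaluation just described, where a prediction counts only if both the boundaries and the type match exactly, can be sketched as follows (a hypothetical helper, not the paper's evaluation script):

```python
# Span-level precision/recall/F1 over (start, end, type) tuples.
def span_prf1(gold_spans, pred_spans):
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

gold = {(0, 2, "PER"), (4, 7, "ORG")}
pred = {(0, 2, "PER"), (4, 6, "ORG")}   # wrong right boundary -> not counted
print(span_prf1(gold, pred))            # (0.5, 0.5, 0.5)
```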
"We focus on evaluating how well each model can handle the incomplete labeling issue in DS-NER tasks.", "To evaluate the efficacy of the DS-NER methods in real usage under distantly supervised settings, we do not use any human-annotated validation or test sets at any stage of the training process.", "The training stopping criteria are set as follows: 100 epochs for the BiLSTM-based methods and 5 epochs for the BERT-based ones.", "We report the performance of the final model instead of the best checkpoint.", "Consequently, the baselines have different performance from their reported results.", "We use the released code for AutoNER and BERT-ES to reproduce their results.", "For the other methods, we report the results based on our implementations.", "BiLSTM-based models utilize pre-trained bio-embeddings (https://github.com/shangjingbo1226/AutoNER) for BC5CDR and pre-trained Stanford Glove embeddings (https://nlp.stanford.edu/projects/glove/) for CoNLL2003.", "BERT-based models use the pre-trained biobert-base-cased-v1.1 (https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) for BC5CDR and bert-base-cased (https://huggingface.co/bert-base-cased) for CoNLL2003.", "The only exception is that BERT-ES uses roberta-base (https://huggingface.co/roberta-base) for CoNLL2003 in its original implementation.", "We first examine the quality of the distantly labeled training data.", "Table 1 shows the detailed evaluation of the distantly labeled training data.",
"Table 1: The quality of the distant labels on the training sets, stated in token-level precision and recall (in %).
Dataset             | Type     | Precision | Recall
BC5CDR (Big Dict)   | Chemical | 97.99     | 63.14
BC5CDR (Big Dict)   | Disease  | 98.34     | 46.73
BC5CDR (Small Dict) | Chemical | 98.66     | 11.43
BC5CDR (Small Dict) | Disease  | 99.25     | 9.31
CoNLL2003 (KB)      | PER      | 82.36     | 82.11
CoNLL2003 (KB)      | LOC      | 99.98     | 65.20
CoNLL2003 (KB)      | ORG      | 90.47     | 60.59
CoNLL2003 (KB)      | MISC     | 100.00    | 20.07
CoNLL2003 (Dict)    | PER      | 99.78     | 79.10
CoNLL2003 (Dict)    | LOC      | 97.56     | 34.69
CoNLL2003 (Dict)    | ORG      | 95.80     | 65.47
CoNLL2003 (Dict)    | MISC     | 99.24     | 57.22",
"The results validate the assumption mentioned in previous work that distant labels generated by dictionaries are often of high precision but low recall.", "Table 2 presents the overall span-level precision, recall and F1 scores of all methods on the test sets.",
"Table 2: The span-level results on the test sets: F1 score (Precision/Recall) (in %). The fully supervised results do not depend on the dictionaries, so they are reported once per dataset.
Fully Supervised     | BC5CDR              | CoNLL2003
Existing SOTA        | 90.99 (-/-)         | 94.60 (-/-)
BERT                 | 83.88 (79.75/88.46) | 89.03 (88.00/90.08)
BiLSTM               | 75.60 (71.27/80.49) | 86.19 (84.06/88.42)
Distantly Supervised | BC5CDR (Big Dict)   | BC5CDR (Small Dict) | CoNLL2003 (KB)      | CoNLL2003 (Dict)
Dict/KB Matching     | 64.32 (86.39/51.24) | 15.69 (80.02/8.70)  | 71.40 (81.13/63.75) | 63.93 (93.12/48.67)
AutoNER              | 79.99 (82.63/77.52) | 20.66 (81.47/11.83) | 67.80 (73.10/63.22) | 61.19 (82.87/48.50)
BERT-ES              | 73.66 (80.43/67.94) | 17.21 (75.60/9.71)  | 72.15 (81.38/64.80) | 63.68 (85.77/50.63)
BNPU LBiLSTM         | 59.24 (48.12/77.06) | 70.21 (64.93/76.43) | 78.44 (74.38/82.97) | 76.11 (73.68/78.70)
MPU BERT             | 68.22 (56.50/86.05) | 73.91 (70.08/78.18) | 65.75 (58.79/74.58) | 67.65 (63.63/72.22)
MPU LBiLSTM          | 60.79 (48.28/82.06) | 73.25 (67.50/80.07) | 69.13 (59.46/82.54) | 71.41 (63.41/81.71)
Conf-MPU BERT        | 77.22 (69.79/86.42) | 71.85 (81.02/64.54) | 79.16 (78.58/79.75) | 81.89 (81.71/82.08)
Conf-MPU LBiLSTM     | 80.07 (76.63/83.82) | 76.18 (82.66/70.64) | 80.02 (77.39/82.84) | 83.34 (85.79/81.02)",
"The proposed Conf-MPU shows a clear advantage over the baseline methods, especially when accompanied by LBiLSTM.", "Almost all distantly supervised baselines perform better than Dict/KB Matching on these four datasets, except for a few cases of BNPU and MPU, which will be discussed later.", "Among the baseline methods, AutoNER and BERT-ES show a strong correlation with the dictionary quality.", "On BC5CDR (Small Dict), where the dictionary suffers from extremely low coverage, the two methods show little improvement in recall.",
"Similarly, this phenomenon can also be observed by comparing their performance on CoNLL2003 (KB) and CoNLL2003 (Dict).", "By contrast, all PU learning based methods demonstrate significantly higher recall on all datasets, showing more robustness to the issue of incomplete labeling.", "However, compared with their performance on BC5CDR (Small Dict), BNPU and MPU suffer from low precision on BC5CDR (Big Dict), which is labeled by a dictionary with high coverage and precision; this is counterintuitive.", "We extend this discussion in Section 5.2.3.", "Note that although BNPU and MPU are derived from the same probability principle, they do not necessarily have similar performance.", "For example, BNPU_LBiLSTM and MPU_LBiLSTM perform similarly on the BC5CDR datasets, but not on the CoNLL2003 datasets.", "We suspect the cause is that they differ in the training process.", "BNPU is trained with a one-vs-all strategy in which the distribution of unlabeled data is different for each entity type, while MPU is trained on all types simultaneously, keeping the same distribution of unlabeled data.", "As mentioned earlier, the distribution of unlabeled data may significantly affect the risk estimation of PU learning.", "In addition, the inference step also has an unpredictable effect on the overall performance of BNPU.", "As the results show, Conf-MPU significantly improves precision compared with BNPU and MPU while maintaining a high level of recall on all datasets, which shows that Conf-MPU can significantly alleviate the estimation bias.", "Another factor in Conf-MPU's performance is the estimation of the confidence score that each token is an entity token.", "To evaluate the quality of the confidence scores, we first convert the estimates into labels: if λ(x) > 0.5, token x is labeled as an entity token; otherwise it is labeled as a non-entity token.", "We present the results in terms of token-level F1 score, precision, and recall in Table 3, where Fully Supervised, using human-annotated ground-truth labels, provides upper-bound references for this estimation, while Binary PU uses distant labels:

Table 3: The results of confidence score estimation on test sets: F1 score (Precision/Recall) (in %). The Fully Supervised value spans both columns, as gold labels do not depend on the dictionary.
Method           | BC5CDR (Big Dict)   | BC5CDR (Small Dict)
Fully Supervised | 85.00 (77.68/93.83)
Binary PU        | 72.38 (61.89/87.17) | 79.14 (81.85/76.60)
Method           | CoNLL2003 (KB)      | CoNLL2003 (Dict)
Fully Supervised | 96.09 (92.77/99.65)
Binary PU        | 88.88 (81.04/98.39) | 88.08 (79.09/99.38)", "We can see that the classifier with the binary PU risk estimation achieves good recall on all of the distantly labeled datasets.", "High recall indicates that the classifier can recognize most entity tokens, which the Conf-MPU risk estimation can exploit to avoid overfitting to false negative samples in the unlabeled data.", "We also evaluate the proposed Conf-MPU models with the confidence scores given by the fully supervised classifier on the four distantly labeled datasets, where performance increases by 2-5 percentage points in F1 score, indicating that the proposed Conf-MPU framework is robust to confidence score estimates of lesser quality.", "We leave the optimization of the confidence score estimation for future work.",
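The binary PU classifier that produces these confidence scores can be sketched as follows. This is a generic non-negative binary PU risk in the spirit of Kiryo et al. (2017), with MAE as the base loss as in Peng et al. (2019); it is an illustration under these assumptions, not the exact bounded estimator used by BNPU or Conf-MPU.

```python
import torch

def mae(out, target):
    """Mean absolute error between sigmoid outputs and a 0/1 target."""
    return torch.mean(torch.abs(torch.sigmoid(out) - target))

def nn_pu_risk(pos_out, unl_out, prior):
    """Non-negative binary PU risk (a sketch, not the paper's estimator).

    pos_out / unl_out: raw scores on dictionary-labeled (positive) and
    unlabeled tokens; prior: assumed fraction of positives hidden in
    the unlabeled data (must be supplied or estimated separately).
    """
    risk_pos = prior * mae(pos_out, torch.ones_like(pos_out))
    # Negative-class risk estimated from unlabeled data, corrected for
    # the positive examples mixed into it.
    risk_neg = (mae(unl_out, torch.zeros_like(unl_out))
                - prior * mae(pos_out, torch.zeros_like(pos_out)))
    return risk_pos + torch.clamp(risk_neg, min=0.0)  # keep the risk non-negative
```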
"To better understand the estimation bias in BNPU and MPU caused by the violation of the PU assumption, we construct a series of dictionaries with different coverage of the entities.", "We treat the dictionaries used to generate labels for BC5CDR (Big Dict) and CoNLL2003 (Dict) as two reference standard dictionaries.", "Then, for each of the two benchmark datasets, we build a group of dictionaries by selecting the first 20%, 40%, 60%, 80%, and 100% of the entries from the standard ones.", "We train BNPU, MPU, and Conf-MPU on the distantly labeled datasets generated by these dictionaries.", "Here we show the results based on LBiLSTM in Figure 2 ((a)-(c) and (f)-(h), on the corresponding test sets).", "A similar trend can be observed in the BERT-based settings.", "We can see a clear decreasing trend in precision for BNPU and MPU as the dictionary size increases (Figure 2 (a) and (f)).", "These phenomena are caused by the violation of the PU assumption.", "When a dictionary has higher coverage, the distribution of the unlabeled data becomes more and more similar to the distribution of the true negative data, instead of to the overall data distribution.", "The BNPU and MPU risk estimations then incur higher bias, leading to lower precision.", "Although their recall remains high, the F1 scores still decrease.", "By contrast, the proposed Conf-MPU effectively avoids this limitation and achieves good performance for all dictionary sizes.", "To further evaluate the Conf-MPU risk estimation, we first conduct ablation studies comparing it with the MPN risk (Eq. 3), whose estimation simply treats unlabeled data as negative samples in distant supervision, and examine the performance across different numbers of epochs.",
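Building the coverage-controlled dictionaries used in this study is straightforward; a minimal sketch, assuming the released entry order is kept as described:

```python
def dictionary_prefixes(entries, fractions=(0.2, 0.4, 0.6, 0.8, 1.0)):
    """Nested dictionaries holding the first 20%-100% of the entries of
    a standard dictionary, preserving the released entry order."""
    return {f: entries[: int(len(entries) * f)] for f in fractions}
```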
"Sub-figures (d), (e), (i), and (j) in Figure 2 show the trends of the F1 scores of the LBiLSTM-based models on the validation sets under the three risk estimations (Conf-MPU, MPU, and MPN), with respect to the number of epochs.", "(d, e) and (i, j) reflect the performance on BC5CDR and CoNLL2003, respectively.", "(d, i) and (e, j) reflect the performance based on the 20% and 100% dictionaries, respectively.", "The results show that the MPN risk estimation can lead to severe overfitting when dictionaries have low coverage.", "Although MPN still causes overfitting on the full dictionaries, its performance there is more stable and generally good.", "By contrast, MPU and Conf-MPU account for the false negative issue during training and do not overfit even on small dictionaries.", "Conf-MPU performs stably and consistently well across more scenarios than the other risk estimations.", "From Table 2, we can observe that Conf-MPU_LBiLSTM outperforms BiLSTM in the fully supervised setting on BC5CDR with both the big and small dictionaries.", "To examine this performance gain, we implement the three methods BNPU, MPU, and Conf-MPU based on BiLSTM instead of LBiLSTM, to evaluate the impact of the lexicon features learned from the dictionaries.", "The results on BC5CDR (Small Dict) are shown in Table 4:

Table 4: Ablation study on lexicon features on BC5CDR (Small Dict): F1 score (Precision/Recall) (in %).
Method   | LBiLSTM             | BiLSTM
BNPU     | 70.21 (64.93/76.43) | 63.37 (57.92/69.97)
MPU      | 73.25 (67.50/80.07) | 62.39 (56.50/69.66)
Conf-MPU | 76.18 (82.66/70.64) | 68.11 (71.68/64.88)", "We can see that the lexicon features used in DS-NER tasks significantly improve performance.", "Experiments performed on the other distantly labeled datasets exhibit similar trends.", "The results suggest that dictionaries in DS-NER tasks can also serve as external features, in addition to providing the distant labels.", "DS-NER.", "Handling noisy labels (false positives and false negatives) in DS-NER has attracted extensive attention in recent years (Yang et al., 2018; Shang et al., 2018; Mayhew et al., 2019; Cao et al., 2019; Peng et al., 2019; Liang et al., 2020; Liu et al., 2021; Zhang et al., 2021).", "Here we briefly discuss a few representative approaches.", "One line of work focuses on alleviating the impact of false negatives (or incomplete labeling).", "Our work belongs to this line.", "AutoNER (Shang et al., 2018) proposes a new tagging scheme that identifies entity candidates by determining whether the connection between two adjacent tokens should be tied, broken, or unknown, and then decides the type of each entity candidate.", "To handle incomplete labeling, tokens with the unknown tag are not counted in the loss calculation.", "Mayhew et al. (2019) introduce a constraint-driven iterative algorithm that learns to detect false negatives in the noisy data and down-weigh them, resulting in a weighted training set on which a weighted NER model is trained.", "Peng et al. (2019) employ PU learning to prevent the model from overfitting to false negatives, and propose a bounded non-negative positive-unlabeled learning method.", "However, the application of binary PU learning to DS-NER is limited by the underlying PU assumption and by its efficiency.", "Our proposed Conf-MPU relaxes these limitations and allows PU learning to be utilized in a wider range of distant supervision scenarios.", "Another line of work considers noise of both types, either explicitly or implicitly.",
"Cao et al. (2019) design a data selection scheme that computes scores for annotation confidence and annotation coverage to distinguish high-quality sentences from noisy ones, and then propose a name tagging model consisting of two modules, sequence labeling and classification, focusing on the high-quality and noisy portions, respectively.", "BOND (Liang et al., 2020), leveraging the power of the pre-trained language model BERT, first adopts early stopping to prevent overfitting to noisy labels and obtains an initialized model, then further boosts performance with a teacher-student self-training framework.", "Liu et al. (2021) propose a calibrated confidence estimation approach for DS-NER and integrate it into an LSTM-CRF model under a self-training framework to reduce the impact of noise.", "Zhang et al. (2021) study the noise in DS-NER from the novel perspective of dictionary bias.", "Specifically, they first formulate DS-NER using a structural causal model, then identify the causes of both false positives and false negatives, and finally de-bias via backdoor adjustment and a causal invariance regularizer.", "We leave enabling Conf-MPU to handle false positives for future work.", "PU Learning.", "PU learning learns a classifier from positive and unlabeled data (Elkan and Noto, 2008; Du Plessis et al., 2014).", "In a broad sense, PU learning belongs to semi-supervised learning.", "However, there is a fundamental difference between them: semi-supervised learning requires labeled negative data, whereas PU learning does not.", "Recently, a few works have significantly enriched PU learning theory.", "Kiryo et al. (2017) propose a non-negative risk estimator for PU learning, which enables the use of deep neural networks for classification given limited labeled positive data.", "Xu et al. (2017) first propose the concept of multi-positive and unlabeled learning, with a margin maximization objective for the multi-class classification problem.", "However, the margin maximization objective cannot easily be applied to popular deep learning architectures.", "Hsieh et al. (2019) propose a novel classification framework incorporating biased negative data in PU learning, which opens up a wider range of applications of PU learning.", "In this paper, we present a novel multi-class positive and unlabeled learning method called Conf-MPU for the DS-NER task.", "Conf-MPU estimates the empirical classification risks using confidence estimates, computed on the distantly labeled training data, of each token being an entity token, to prevent the model from overfitting to false negatives.", "We empirically show that Conf-MPU can significantly reduce the potential risk estimation bias caused by violation of the PU assumption.", "Extensive experiments illustrate that, compared with existing DS-NER methods, Conf-MPU is more robust to various types of dictionaries and handles the incomplete labeling problem effectively.", "This work is supported in part by NIFA grant no. 2022-67015-36217 from the USDA National Institute of Food and Agriculture." ]
[ "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "objective", "objective", "abstain", "objective", "abstain", "result", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "method", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "method", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "method", "result", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "method", "other", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "other", "other", "abstain", "other", "abstain", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "abstain", "result", "abstain", "other" ]
[ "Abstractive summarization models are commonly trained using maximum likelihood estimation, which assumes a deterministic (one-point) target distribution in which an ideal model will assign all the probability mass to the reference summary.", "This assumption may lead to performance degradation during inference, where the model needs to compare several system-generated (candidate) summaries that have deviated from the reference summary.", "To address this problem, we propose a novel training paradigm which assumes a non-deterministic distribution so that different candidate summaries are assigned probability mass according to their quality.", "Our method achieves a new state-of-the-art result on the CNN/DailyMail (47.78 ROUGE-1) and XSum (49.07 ROUGE-1) datasets.", "Further analysis also shows that our model can estimate probabilities of candidate summaries that are more correlated with their level of quality.", "1 1 Introduction Neural methods for abstractive summarization (Rush et al., 2015; Nallapati et al., 2016; Chopra et al., 2016; Lewis et al., 2020; Zhang et al., 2020) formulate summarization as a sequence-to-sequence (Seq2Seq) problem (Sutskever et al., 2014), learning to generate the summary in an autoregressive manner.", "Such models are commonly trained with maximum likelihood estimation (MLE), maximizing predictive probability of the reference output given the gold sub-sequence before it.", "However, during inference the model must also generate the output based on possibly erroneous previous steps.", "This can hurt model performance, a phenomenon often called exposure bias (Bengio et al., 2015; Ranzato et al., 2016).", "To maintain reasonable performance even in the case of a sub-sequence with errors, we argue that the 1 We have made our code, results, and trained models publicly available at https://github.com/yixinL7/BRIO.", "model must accurately estimate relative quality of different generated outputs, since effective inference requires comparison among these candidates.", "To understand whether existing models can accurately perform such relative comparisons, we conducted a preliminary study on pre-trained BART (Lewis et al., 2020), first generating two candidate summaries from the model and observing whether a higher probability is assigned to the candidate with a higher ROUGE (Lin, 2004) score.", "As Tab.", "1 shows, the accuracy is far from ideal.", "This is likely due to the fact that MLE training only encourages the model to assign high probability to the reference summary, and is agnostic about any relative comparison between non-reference summaries.", "However, we argue that it is also important for the order of model scores to be coordinated with the actual quality metrics by which the summaries will be evaluated higher model scores should indicate better quality summaries.", "In the following we will refer to models that have such scores as coordinated for conciseness.", "We introduce a training paradigm which requires the abstractive model to be able to be accurate with respect to predicting the tokens in the reference summaries and coordinated with respect to 2890 Encoder Decoder Source Input Reference Output Encoder Decoder () () () () Source Input Candidate Output Decoder () () () () Candidate Output () () > Seq2Seq Generation Model Reference-free Evaluation Model Figure 1: Comparison of MLE loss ( LMLE ) and the contrastive loss ( L Ctr ) in our method.", "the candidate summaries.", "In other words, we give the abstractive model a dual role: as a 
"The generation model is trained using the standard MLE loss, but to train the evaluation model we introduce a contrastive loss (Hadsell et al., 2006) defined over different candidate summaries generated by pre-trained abstractive models (Fig. 1), following previous work on ranking-based or contrastive learning (Hopkins and May, 2011; Zhong et al., 2020; Liu et al., 2021b).", "Our main contribution is to change the target distribution of abstractive models from the one-point deterministic distribution assumed by MLE training to a non-deterministic distribution in which candidate summaries are also assigned probability mass according to their quality.", "The new SOTA performance on the CNN/DailyMail (Hermann et al., 2015) and XSum (Narayan et al., 2018) datasets demonstrates the effectiveness of our method.", "Our in-depth analysis also shows that abstractive models trained using our method estimate the quality of candidate summaries more accurately, in concert with the objective of our training paradigm.", "The goal of abstractive summarization is to create a function g that takes a source document D and generates an appropriate summary S: S = g(D) (1)", "Training Objective Neural abstractive summarization models aim to learn a neural model g that results in good summaries.", "Maximum likelihood estimation (MLE) is the standard training algorithm.", "It aims to maximize the likelihood of the reference summary S:

\theta^{*} = \arg\max_{\theta} \sum_{i} \log p_{g_{\theta}}(S^{(i)} \mid D^{(i)}; \theta) \quad (2)

where \theta denotes the parameters of g and p_{g_{\theta}} denotes the probability distribution entailed by these parameters.", "The summation is over the training set, and \{D^{(i)}, S^{(i)}\} is the i-th training sample.", "For a specific sample \{D^{(i)}, S^{(i)}\}, Eq. 2 is equivalent to minimizing the sum of negative log-likelihoods of the tokens \{s_1, \dots, s_j, \dots, s_l\} in the reference summary S, whose length is l, which is the cross-entropy loss:

L_{xent} = -\sum_{j=1}^{l} \sum_{s} p_{true}(s \mid D, S_{<j}) \log p_{g_{\theta}}(s \mid D, S_{<j}; \theta) \quad (3)

where S_{<j} denotes the partial reference sequence \{s_0, \dots, s_{j-1}\} and s_0 is a pre-defined start token.", "p_{true} is a one-hot distribution under the standard MLE framework:

p_{true}(s \mid D, S_{<j}) = \begin{cases} 1 & s = s_j \\ 0 & s \neq s_j \end{cases} \quad (4)", "In practice, label smoothing (Szegedy et al., 2016) is a widely used and effective technique that modifies the target distribution in Eq. 4 to a \"soft\" label by assigning probability mass \beta to the other tokens:

p_{true}(s \mid D, S_{<j}) = \begin{cases} 1 - \beta & s = s_j \\ \frac{\beta}{N - 1} & s \neq s_j \end{cases} \quad (5)

where N is the size of the dictionary.", "Inference and Exposure Bias During inference, the abstractive model g is used to generate the candidate summary in an autoregressive manner.", "Since it is intractable to enumerate all possible candidate outputs, in practice methods such as beam search are used to reduce the search space.", "One important step in search is estimating the probability of the next word s_t given the previously predicted sequence \hat{S}_{<t}: p_{g_{\theta}}(s_t \mid D, \hat{S}_{<t}; \theta) (6)", "Comparing Eq. 6 with Eq. 3, the major difference is that during inference the model makes new predictions based on its own previous predictions \hat{S}_{<t} instead of the reference S_{<t}.", "As a result, even if the generation model g achieves very high accuracy w.r.t. Eq. 3, once \hat{S}_{<t} starts to deviate from S, there is the risk that the performance of g will significantly degrade.", "This problem has been identified as exposure bias (Bengio et al., 2015).",
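The label-smoothed target of Eq. 5 corresponds to the following cross-entropy computation, sketched in PyTorch (a minimal illustration; tensor shapes are assumptions):

```python
import torch
import torch.nn.functional as F

def label_smoothed_xent(logits, target, beta=0.1):
    """Cross-entropy against the smoothed target distribution of Eq. 5.

    logits: (seq_len, vocab_size) decoder scores; target: (seq_len,)
    gold token ids; beta: total mass spread over the N - 1 other tokens.
    """
    logp = F.log_softmax(logits, dim=-1)
    n = logits.size(-1)
    nll = -logp.gather(-1, target.unsqueeze(-1)).squeeze(-1)  # gold-token term
    uniform = -logp.sum(dim=-1)                               # sum over the vocabulary
    # (1 - beta) on the gold token, beta / (N - 1) on every other token.
    return ((1.0 - beta) * nll + beta / (n - 1) * (uniform - nll)).sum()
```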
Eq.", "3, once S <t starts to deviate from S , there is the risk that the performance of g will significantly degrade.", "This problem has been identified as the exposure bias (Bengio et al., 2015).", "Eq.", "6 implies that the abstractive model g should be able to assign higher estimated probability to the better candidate summary during inference.", "However, this intuition is not directly captured in the standard MLE objective used in training a model obtaining zero MLE loss would assign zero probability to any candidate summary different from the reference.", "This is obviously improper for any task where multiple reasonable generations may exist (Khayrallah et al., 2020), and also does not say anything about the ordering of two imperfect references.", "We therefore advocate for making the alternative assumption that the probability of one candidate should be well-correlated with its quality as evaluated by an automatic metric M .", "Since it is intractable to enumerate all the possible candidate outputs, we only require our model to be able to accurately predict the ranking order of a set of the most probable candidate summaries S , which are its own beam search results.", "In order to achieve this objective, we slightly modify the conditions of Eq.", "5, maintaining the general functional form, but instead specifying the marginal probability of the non-reference candidates S to be , and encouraging coordination of probabilities and qualities among non-reference candidates as follows: p true ( S | D ) = 1 S = S (cid:80) S S p true ( S | D ) = S (cid:54) = S p true ( S i | D ) > p true ( S j | D ) S i , S j S , M ( S i ) > M ( S j ) (7) We next describe precisely how we encourage coordination through contrastive learning .", "Contrastive Learning for Coordination The candidate quality measure M can be defined in many ways.", "In this work we define it as the ROUGE (Lin, 2004) score of a candidate summary S i given the reference summary S .", "To coordinate a pre-trained abstractive model, we 1) use it to generate different candidate summaries with various levels of quality, 2 then 2) encourage the model to assign higher estimated probabilities to better candidates by fine-tuning the model with a contrastive loss, following the previous work (Hopkins and May, 2011; Zhong et al., 2020): L ctr = (cid:88) i (cid:88) j>i max(0 , f ( S j ) f ( S i ) + ij ) (8) where S i and S j are two different candidate summaries and ROUGE ( S i , S ) > ROUGE ( S j , S ) , i, j, i < j .", "ij is the margin multiplied by the difference in rank between the candidates, i.e., ij = ( j i ) .", "f ( S i ) is the length-normalized estimated log-probability 3 f ( S ) = (cid:80) lt =1 log p g ( s t | D, S <t ; ) | S | (9) where is the length penalty hyperparameter.", "This loss gives the abstractive model a dual purpose, first as a reference-free evaluation model, which can be used in a two-stage summarization pipeline, where it is used to score the candidates generated by a pre-trained generation model and select the final output from them.", "However, since the autoregressive generation depends on both the token-level prediction accuracy and sequence-level coordination , the model fine-tuned with the contrastive loss alone can no longer be used as a generation model.", "Multi-task Fine-tuning Following Edunov et al. (2018), we combine the contrastive (Eq. 8) and cross-entropy (Eq. 
"Multi-task Fine-tuning Following Edunov et al. (2018), we combine the contrastive (Eq. 8) and cross-entropy (Eq. 3) losses to preserve the generation ability of the pre-trained abstractive model:

L_{mul} = L_{xent} + \gamma L_{ctr} \quad (10)

where \gamma is the weight of the contrastive loss.", "We note that the contrastive and cross-entropy losses can effectively complement each other: since the contrastive loss is defined at the sequence level, the token-level cross-entropy loss serves as a normalization that ensures the model assigns balanced probability mass across the whole sequence.", "Training Methods of Seq2Seq Models To align the training objective with the evaluation metric, structured losses have been used for Seq2Seq model training.", "Among them, margin-based losses (Herbrich et al., 1999; Taskar et al., 2004; Gimpel and Smith, 2010), which require the model to assign higher probability to the better output, are a major category.", "Many margin-based losses used in modern Seq2Seq models (Wiseman and Rush, 2016; Edunov et al., 2018) assume a deterministic (one-point) distribution: a model can achieve zero loss if it assigns a much higher probability to the (pseudo-)reference, regardless of the relative ordering of other candidate summaries.", "By contrast, our method makes a non-deterministic assumption (Eq. 7), which focuses on the pairwise ranking of a set of candidate summaries.", "One main challenge of directly optimizing a Seq2Seq model with quality scores of the output is that the discrete sampling process makes the loss non-differentiable.", "To circumvent this problem, reinforcement learning has been used to reformulate conditional text generation tasks (Ranzato et al., 2016; Bahdanau et al., 2016; Li et al., 2016; Paulus et al., 2018; Li et al., 2019).", "Compared to this school of methods, our method is based on supervised learning, and it is more stable and less sensitive to design choices (e.g., reward shaping), which are well-known challenges of reinforcement learning methods.", "Minimum risk training (Shen et al., 2016; Wieting et al., 2019) and other online sampling based methods (Bengio et al., 2015; Norouzi et al., 2016; Zhang et al., 2019) belong to another school of methods that circumvent the problem of non-differentiability.", "However, they exhibit stability problems similar to those of reinforcement learning.", "Contrastive Learning Recently, contrastive learning (Hadsell et al., 2006) has been introduced into several conditional text generation tasks, such as machine translation (Yang et al., 2019; Pan et al., 2021), text summarization (Cao and Wang, 2021; Xu et al., 2021; Sun and Li, 2021), and other tasks (Uehara et al., 2020; Cho et al., 2021; Lee et al., 2021b).", "Among these application scenarios, most work deploys contrastive learning in the latent representation space, following the framework proposed in Chen et al. (2020).", "In this work, by contrast, we adopt contrastive learning over the discrete space of the generated texts.", "Besides, instead of constructing the contrastive learning examples by rule-based methods (e.g., perturbing the reference output), we use the generation models themselves to construct the examples, which brings the contrastive learning task closer to the generation task.",
"Sun and Li (2021) also adopt contrastive learning on generated texts.", "However, their formulation belongs to the margin-based losses.", "We discussed the difference between our method and margin-based losses in the previous paragraphs.", "Discriminative Reranking Discriminative reranking has been widely studied for conditional generation tasks (Shen et al., 2004; Och et al., 2004; Wan et al., 2015; Mizumoto and Matsumoto, 2016).", "Some recent work (Liu and Liu, 2021; Lee et al., 2021a) has also explored discriminative reranking of candidates from neural natural language generation models, adopting large pre-trained language models (e.g., BERT (Devlin et al., 2019)) as the reranker.", "In this work, we instead use the Seq2Seq model (e.g., BART) trained on the same dataset as the reranking model, which maximizes parameter sharing across the two stages.", "Besides, our approach contributes an instance of leveraging large pre-trained Seq2Seq models as quality estimation models (Yuan et al., 2021).", "Datasets We use three datasets for our experiments (statistics in Appendix A).", "CNNDM (Hermann et al., 2015) (https://cs.nyu.edu/~kcho/DMQA/) is a large-scale news dataset.", "Following Nallapati et al. (2016), we treat the news articles as the source documents and the associated highlights as the summaries.", "XSum (Narayan et al., 2018) (https://github.com/EdinburghNLP/XSum) is a highly abstractive dataset of articles from the British Broadcasting Corporation (BBC).", "NYT (Sandhaus, 2008) (https://catalog.ldc.upenn.edu/LDC2008T19) contains articles from the New York Times and the associated summaries.", "We follow Kedzie et al. (2018) for data preprocessing and splitting, and use the associated archival abstracts as the summaries.", "Baselines We choose a variety of related models with strong performance as baselines.", "BART (Lewis et al., 2020) and PEGASUS (Zhang et al., 2020) are both large pre-trained Seq2Seq LMs standard in the literature.", "GSum (Dou et al., 2021) is built on BART and improves performance by using additional guidance from an extractive summarizer.", "SimCLS (Liu and Liu, 2021) introduces a two-stage framework in which the pre-trained BART model is used to generate candidates and a pre-trained RoBERTa (Liu et al., 2019) model is fine-tuned as an evaluation model to score the candidate summaries and select from them.", "It achieves state-of-the-art performance on both CNNDM and XSum.", "GOLD (Pang and He, 2021) uses offline reinforcement learning to train the BART model by treating the reference summaries as demonstrations; this different formulation can also improve the performance of the original BART.", "SeqCo (Xu et al., 2021) and ConSum (Sun and Li, 2021) are two recent methods that aim to leverage contrastive learning to improve the performance of the abstractive summarization model (BART).", "Implementation Details In the following experiments, we use either BART or PEGASUS as the backbone.", "We label our proposed methods BRIO, with two variants: (1) BRIO-Ctr is fine-tuned with the contrastive loss (Eq. 8) only; (2) BRIO-Mul is fine-tuned with the multi-task loss (Eq. 10).",
10).", "We use BRIO-Ctr as an evaluation model that scores different candidate summaries generated by a Seq2Seq abstractive model and selects the final output from them, and BRIO-Mul as a standard Seq2Seq model that takes the source documents as input and generates the output in an autoregressive manner.", "Further details are in Appendix B. 5.2 Results The results are shown in Tab 2.", "For CNNDM and NYT we use BART as the backbone model while for XSum we use the pre-trained PEGASUS model as our base model since it achieves better performance than BART.", "We have the following observations: (1) BRIO-Ctr outperforms SimCLS , its counterpart as an evaluation model in a two-stage summarization framework.", "Specifically, both BRIO-Ctr and SimCLS are used to score the candidate summaries generated by a Seq2Seq abstractive model (BART).", "The final outputs are selected based on those scores.", "We attribute BRIO-Ctr's superior performance to its use of the same model architecture (BART) for both candidate generation and scoring, while SimCLS uses RoBERTa as the evaluation model.", "As a result, BRIO-Ctr maximizes the parameter sharing between the two stages, and preserves the power of the Seq2Seq model pre-trained on the same dataset.", "(2) BRIO-Mul is able to establish the new stare-of-the-art performance on CNNDM .", "Notably, the previous state-of-the-art model, GSum , takes additional guidance as input and needs a separate encoder to encode the guidance information, while BRIO-Mul uses the same parameterization of BART.", "Compared to other methods (ConSum, SeqCo, GOLD) that aim to improve upon BART, BRIO-Mul performs much better, showing the effectiveness of our training method.", "(3) Since on XSum we use PEGASUS instead of BART as the base model, the result shows that our method is not restricted to the specific choice of the base model.", "We further perform some in-depth analyses from diverse perspectives on the CNNDM dataset to gain more insights into our proposed method.", "Coefficients of the Multi-Task Loss The multitask loss (Eq. 10) used to train our model contains two parts: the cross-entropy loss and the contastive loss.", "As shown in Tab.", "3, as the weight of the contrastive loss ( ) increases, the model's performance improves.", "However, the cross-entropy loss is still necessary to preserve the model's ability as a generation model.", "We argue that this is because the token level accuracy is still important during the autoregressive generation process, where the individual tokens are predicted sequentially.", "In addition, we also found that the model tends to achieve the best performance (w.r.t the ROUGE scores on the development set) faster with a higher .", "Specifically, it requires less than one entire epoch to achieve the best performance on CNNDM , making our approach an efficient fine-tuning method.", "erate, we can use it to generate a new set of candidates in the same way as we used the pre-trained BART model, and continue fine-tuning it on this newly created set of candidates (Och, 2003).", "Fig. 2 illustrates this iterative process.", "The results shown in Tab.", "4 illustrate that this new model (BRIO-Loop) outperforms BRIO-Mul.", "Besides, the model reached the best performance very quickly, showing the potential of adopting our method in an online framework where the new candidates are dynamically generated from the current model.", "We leave this direction for future work.", "Increasing the Beam Width While theoretically a larger beam width (i.e. 
"Increasing the Beam Width While theoretically a larger beam width (i.e., the number of candidates maintained during beam search) would allow more candidates to be considered and therefore increase the upper bound of the performance, in practice model performance may be lower if the beam width is too large.", "The reason for this phenomenon is closely related to the low sequence-level coordination of the generator.", "Specifically, increasing the beam width may introduce candidates with lower quality (Stahlberg and Byrne, 2019), and the generator may not be able to differentiate them from high-quality candidates.", "In Tab. 5, we compare the performance of the pre-trained BART and our model (BRIO-Mul) with different beam widths used during inference.", "We observe that the performance of BART goes down as the beam width increases.", "On the other hand, our model is able to achieve better performance with a larger number of beams, demonstrating that our training method can improve the coordination of the model by encouraging it to assign estimated probabilities to candidate summaries that are well-correlated with their quality.", "Training with Different Evaluation Metrics In the previous experiments, we used ROUGE as the evaluation metric to define the target ordering of the candidate summaries (Eq. 7).", "To evaluate our method's performance beyond ROUGE, we use a model-based semantic similarity metric, BERTScore (Zhang* et al., 2020) (https://github.com/Tiiiger/bert_score), as the evaluation metric M in Eq. 7 to compare the performance of different candidate summaries.", "Then, we train another version of BRIO-Mul based on the order of candidate summaries calculated by BERTScore.", "The results in Tab. 6 show that (1) our model can significantly improve performance when either ROUGE or BERTScore is used as the target evaluation metric for ordering candidate summaries:

Table 6: Results on CNNDM using different evaluation metrics as M in Eq. 7. R-1/2/L are ROUGE-1/2/L; BS is BERTScore.
System       | R-1   | R-2   | R-L   | BS
BART         | 44.29 | 21.17 | 41.09 | 27.38
BRIO-Mul (R) | 47.78 | 23.55 | 44.57 | 32.11
BRIO-Mul (B) | 47.53 | 23.22 | 44.37 | 32.59", "This suggests that it is possible to use our method to optimize any specific target metric, making it an alternative to reinforcement learning or minimum risk training.", "(2) Our model trained on one evaluation metric (e.g., BERTScore) also achieves improvement on another metric (e.g., ROUGE) compared with the baseline model, which indicates that the improvement made by our model does not come from exploiting the potential weaknesses of individual metrics.", "Besides, this result also demonstrates a non-trivial degree of agreement between ROUGE and BERTScore.", "Novel n-grams We compare the ratio of novel n-grams in the reference, BRIO-Mul's, and BART's summaries.", "As Tab. 7 shows, our model is more abstractive than BART, although reference summaries still contain more novel n-grams.", "This is likely due to the fact that our model is optimized at the sequence level, allowing more freedom for paraphrasing and compression.", "We further investigate the relation between \"abstractiveness\" and model performance by comparing our model (BRIO-Mul) with the baseline model (BART) on different buckets of test examples grouped by the \"novelty\" of the reference summaries (the calculation is performed using ExplainaBoard (Liu et al., 2021a), https://github.com/neulab/ExplainaBoard), i.e.,

\mathrm{Novelty}(D, S) = \frac{\sum_{g \in G_S} \mathbb{1}(g \notin G_D)}{|G_S|} \quad (11)

where D and S are the source document and reference summary respectively, G_D and G_S are the sets of bigrams in D and S, and \mathbb{1} is the indicator function.", "The results in Fig. 3 show that when novelty is higher, (1) all models' performance decreases, and (2) our model achieves a larger improvement over the baseline model.",
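The novelty measure of Eq. 11 is easy to reproduce; a minimal sketch, assuming pre-tokenized inputs:

```python
def bigrams(tokens):
    """Set of adjacent token pairs in a token list."""
    return set(zip(tokens, tokens[1:]))

def novelty(doc_tokens, ref_tokens):
    """Eq. 11: fraction of reference bigrams absent from the source."""
    g_d, g_s = bigrams(doc_tokens), bigrams(ref_tokens)
    return sum(1 for g in g_s if g not in g_d) / len(g_s) if g_s else 0.0
```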
"Rank Correlation We compute the rank correlation between the estimated probabilities of the candidate summaries calculated by the generators and the quality scores of those candidate summaries.", "We use Eq. 9 to calculate the estimated probabilities (the value of the length penalty factor \alpha in Eq. 9 is chosen by maximizing the rank correlation on the validation set), and we use ROUGE-1 as the quality score metric of the candidate summaries.", "We calculate Spearman's rank correlation for each sample and use the average score as the overall correlation.", "We investigate two specific settings: (1) ranking candidate summaries generated by a different model (PEGASUS); (2) ranking candidate summaries generated by the models themselves (BART & BRIO-Mul).", "We use 16 candidates in total for the calculation.", "As Tab. 8 shows, our model achieves better rank correlation on the candidate summaries generated both by itself and by the independent model.", "This suggests that our model can better estimate the quality of candidate summaries.",
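The rank-correlation protocol above can be sketched as follows (per-sample Spearman correlation averaged over the test set; the list layout is an assumption):

```python
from scipy.stats import spearmanr

def avg_rank_correlation(model_scores, quality_scores):
    """Average per-sample Spearman correlation between a model's
    candidate scores (Eq. 9) and the candidates' ROUGE-1 values.

    Both arguments are lists with one sub-list of 16 candidate scores
    per source document.
    """
    rhos = [spearmanr(m, q).correlation
            for m, q in zip(model_scores, quality_scores)]
    return sum(rhos) / len(rhos)
```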
"5.4 Token-level Calibration Calibration requires that a model's confidence in its predictions equal the accuracy of those predictions (Guo et al., 2017).", "Previous work (Müller et al., 2019; Kumar and Sarawagi, 2019; Wang et al., 2020) has found that a more calibrated text generation model tends to have better performance, and techniques like label smoothing can improve both token-level calibration and sequence-level accuracy (i.e., the ability to generate better results).", "One intuitive explanation of this phenomenon is to interpret the model's estimated probability of a generated summary as the product of the model's confidences on a series of token-level predictions.", "Then, since a more calibrated model's confidence better estimates the accuracy of its predictions, the model's estimated probability of a sequence should be more indicative of the quality of that sequence, which is essential for beam search during inference.", "However, the relation between token-level calibration and sequence-level performance remains inconclusive (Müller et al., 2019); in general, better token-level calibration does not guarantee better sequence-level performance.", "For example, a generator that always predicts a uniform distribution over all tokens would be perfectly calibrated; however, such a model would not generate high-quality outputs.", "We investigate this relation from the opposite direction by evaluating whether our model (BRIO-Mul), which is trained to have better sequence-level performance, is also more calibrated at the token level compared with baseline models trained using MLE and label smoothing.", "We follow previous work in using the Expected Calibration Error (ECE; Naeini et al., 2015) as the evaluation metric for calibration:

\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right| \quad (12)

where the samples are grouped into M equal-width buckets by confidence (conf), B_m denotes the m-th bucket, and n is the total number of samples.", "Following Wang et al. (2020), we evaluate model calibration on the system-generated summaries during inference and use the tercom toolkit (http://cs.umd.edu/~snover/tercom/) to assign labels (correct/incorrect) to the system-generated summaries based on the reference summaries.", "The results in Tab. 9 show that BRIO-Mul is better calibrated than BART, suggesting that our method helps to improve token-level calibration by explicitly encouraging the model to produce more accurate sequence-level probability estimates:

Table 9: Expected Calibration Error (ECE), accuracy (Acc), and confidence (Conf) on the test sets of CNNDM and XSum.
Dataset | System   | ECE   | Acc   | Conf
CNNDM   | BART     | .4097 | .3711 | .7365
        | BRIO-Mul | .2719 | .4271 | .6652
XSum    | PEGASUS  | .2369 | .4688 | .6990
        | BRIO-Mul | .1423 | .4744 | .5881", "The reliability graphs are shown in Fig. 4.", "[Figure 4: Reliability graphs on the CNNDM and XSum datasets. The accuracy of the model's predictions is plotted against the model's confidence in those predictions.]", "We find that (1) abstractive models are generally over-confident in their own predictions, and (2) models are generally more calibrated on XSum than on CNNDM.", "This is likely because XSum has shorter summaries and is therefore less affected by exposure bias.",
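Eq. 12 with equal-width confidence buckets can be computed as below. This is a sketch; obtaining the per-token correct/incorrect flags via tercom is assumed to have been done separately.

```python
import numpy as np

def ece(confidence, correct, n_bins=10):
    """Expected Calibration Error (Eq. 12) with M equal-width buckets.

    confidence: probability the model assigned to each generated token;
    correct: 1/0 flags marking whether each token was judged correct.
    """
    confidence, correct = np.asarray(confidence), np.asarray(correct)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total, err = len(confidence), 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        if mask.any():
            # |B_m| / n, weighted gap between accuracy and confidence.
            err += mask.sum() / total * abs(
                correct[mask].mean() - confidence[mask].mean())
    return err
```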
"5.5 Few-shot Fine-tuning The training paradigm proposed in this paper may be extended to any Seq2Seq model.", "However, it can be a non-trivial overhead to generate the candidate summaries using large neural models on the entire training set.", "On the other hand, recent work (Raffel et al., 2020; Zhang et al., 2020; Schick and Schütze, 2021; Fabbri et al., 2021) has shown that few-shot learning can be an effective fine-tuning method for pre-trained models on text generation tasks.", "Therefore, we investigate our model's performance in a few-shot setting.", "Specifically, we randomly sample 100/1000 examples from the training sets of CNNDM/XSum, and fine-tune the models that were pre-trained using the MLE loss on those examples.", "More training details can be found in Appendix C.", "The results are shown in Tab. 11:

Table 11: Few-shot fine-tuning. BRIO-Few is trained on only 100/1000 training examples on CNNDM and XSum respectively. R-1/2/L are ROUGE-1/2/L F1 scores.
Dataset | System   | R-1   | R-2   | R-L
CNNDM   | BART     | 44.29 | 21.17 | 41.09
        | BRIO-Few | 45.81 | 21.91 | 42.61
XSum    | PEGASUS  | 47.46 | 24.69 | 39.53
        | BRIO-Few | 47.95 | 24.89 | 39.71", "All experiments are repeated three times, and the reported results are the average performance.", "The results indicate that our model can improve over the baseline model in the few-shot learning setting with a small computational overhead.", "5.6 Case Study on CNNDM Tab. 10 presents an interesting pattern we observed when comparing the results of BRIO-Mul and BART, which demonstrates that our method helps the abstractive model filter out noise patterns in the original data:

Table 10: Case study on CNNDM. BRIO-Mul learns to ignore the noise pattern (\"click here\") while BART cannot.

[Example 1]
Reference: chelsea forward tammy abraham nets first-half double for chelsea. dominic solanke adds a third late on as chelsea look set to win trophy. manchester city struggle without injured star thierry ambrose. read: mourinho warns his young chelsea players he can not play them all. click here to read our match report from man city's academy stadium.
BART: tammy abraham scored twice in the first half to give chelsea the lead. isaac buckley-ricketts levelled the game for manchester city. dominic solanke scored late on to put a gloss on the scoreline. click here to read sportsmail's player ratings from the youth cup final.
BRIO-Mul: chelsea beat manchester city 3-1 in the youth cup final at the etihad stadium. tammy abraham scored twice in the first half to give chelsea the lead. dominic solanke scored late on to seal the win for the home side.

[Example 2]
Reference: alejandro valverde won ahead of julian alaphilippe and michael albasini. chris froome finished 123rd after a crash during the final 12 kilometres. team sky's sports director gabriel rasch praised froome for finishing. rasch said froome was 'banged up' but expects to ride tour de romandie.
BART: movistar rider alejandro valverde won fleche wallonne on wednesday. team sky's chris froome fell in the final 12km but finished the race. philippe gilbert pulled out of the race after a bad crash 50km from the end. click here for more cycling news.
BRIO-Mul: alejandro valverde defended his fleche wallonne title in belgium on wednesday. movistar rider finished ahead of julian alaphilippe and michael albasini. team sky's chris froome fell in the final 12km of the race but finished in 123rd. froome was involved in a crash but finished the race despite being 'banged up'.

[Example 3]
Reference: manuel pellegrini won the premier league and capital one cup last season. city currently sit fourth in the league table 12 points behind chelsea. pellegrini's contract expires at the end of the 2015-16 season. city players have been impressed with vieira's work with the youth team. pep guardiola is city's first-choice to succeed pellegrini at the etihad.
BART: manuel pellegrini's future at manchester city is under scrutiny. patrick vieira is highly-respected among the city players. city's first-choice managerial option is bayern munich boss pep guardiola. click here for all the latest manchester city news. click here for more premier league news.
BRIO-Mul: manchester city players have backed patrick vieira to replace manuel pellegrini as manager of the club. the frenchman is highly-respected among the players at the etihad stadium. pellegrini's future at the club is under scrutiny after a disappointing season. city's first-choice manager is current bayern munich boss pep guardiola.",
"Specifically, some of the reference summaries in CNNDM (331/11490) contain the phrase \"click here\", pointing to a hyperlink, and 103 source documents also contain this phrase.", "BART picked up this pattern and generates the phrase in 96 output summaries.", "On the contrary, our model learns to ignore this noise pattern and never generates it across the whole test set, likely because it identified that generated candidates with this pattern rarely achieve a high ROUGE score, and down-weighted their probability accordingly.", "In this work, we presented a new training paradigm that assigns candidate outputs probability mass according to their quality using contrastive learning.", "While our method has achieved significant improvement on abstractive summarization, we note several directions for future work to explore.", "First, since our method makes no assumptions specific to the summarization task, it can be extended to other conditional text generation tasks such as machine translation.", "Second, it is possible to apply our method in a reinforcement learning setting, where the candidate summaries are dynamically generated.", "Finally, in our experiments we only used diverse beam search to generate the candidate summaries, but it is likely that other candidate generation methods could yield further improvements.", "We thank the anonymous reviewers for valuable feedback and helpful suggestions." ]
[ "abstain", "abstain", "objective", "objective", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "objective", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "result", "objective", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "other", "other", "other", "objective", "other", "other", "method", "other", "other", "other", "other", "method", "method", "other", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "result", "objective", "objective", "objective", "method", "method", "other" ]
[ "When answering a question, people often draw upon their rich world knowledge in addition to the particular context.", "Recent work has focused primarily on answering questions given some relevant document or context, and required very little general background.", "To investigate question answering with prior knowledge, we present COMMONSENSEQA: a challenging new dataset for commonsense question answering.", "To capture common sense beyond associations, we extract from CONCEPTNET (Speer et al., 2017) multiple target concepts that have the same semantic relation to a single source concept.", "Crowd-workers are asked to author multiple-choice questions that mention the source concept and discriminate in turn between each of the target concepts.", "This encourages workers to create questions with complex semantics that often require prior knowledge.", "We create 12,247 questions through this procedure and demonstrate the difficulty of our task with a large number of strong baselines.", "Our best baseline is based on BERT-large (Devlin et al., 2018) and obtains 56% accuracy, well below human performance, which is 89%.", "When humans answer questions, they capitalize on their common sense and background knowledge about spatial relations, causes and effects, scientific facts and social conventions.", "For instance, given the question Where was Simon when he heard the lawn mower? , one can infer that the lawn mower is close to Simon, and that it is probably outdoors and situated at street level.", "This type of knowledge seems trivial for humans, but is still out of the reach of current natural language understanding (NLU) systems.", "river waterfall bridge valley", "a) Sample ConceptNet for specific subgraphs", "b) Crowd source corresponding natural language questions and two additional distractors Where on a river can you hold a cup upright to catch water on a sunny day?", "waterfall, bridge, valley, stream , bottom AtLocation I'm crossing the river , my feet are wet but my body is dry, where am I?", "waterfall, bridge, valley, bank , island pebble stream bank AtLocation A t L o c a t i o n canyon AtLocation AtLocation A t L o c a t i o n AtLocation Figure 1:", "waterfall, bridge, valley, pebble , mountain Where can I stand on a river to see water falling without getting wet?", "Work on Question Answering (QA) has mostly focused on answering factoid questions, where the answer can be found in a given context with little need for commonsense knowledge (Hermann et al., 2015; Rajpurkar et al., 2016; Nguyen et al., 2016; Joshi et al., 2017).", "Small benchmarks such as the Winograd Scheme Challenge (Levesque, 2011) and COPA (Roemmele et al., 2011), targeted common sense more directly, but have been difficult to collect at scale.", "Recently, efforts have been invested in developing large-scale datasets for commonsense reasoning.", "In SWAG (Zellers et al., 2018b), given a textual description of an event, a probable subsequent event needs to be inferred.", "However, it has been quickly realized that models trained on large amounts of unlabeled data (Devlin et al., 2018) capture well this type of information and performance on SWAG is already at human level.", "VCR (Zellers et al., 2018a) is another very recent attempt that focuses on the visual aspects of common sense.", "Such new attempts highlight the breadth of commonsense phenomena, and make it evident that research on common sense has only scratched the surface.", "Thus, there is need for datasets and models that will further our 
"In this work, we present COMMONSENSEQA, a new dataset focusing on commonsense question answering, based on knowledge encoded in CONCEPTNET (Speer et al., 2017).", "We propose a method for generating commonsense questions at scale by asking crowd workers to author questions that describe the relation between concepts from CONCEPTNET (Figure 1).", "A crowd worker observes a source concept ('river' in Figure 1) and three target concepts ('waterfall', 'bridge', 'valley') that are all related to it by the same CONCEPTNET relation (AtLocation).", "The worker then authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not.", "This primes the workers to add commonsense knowledge to the question that separates the target concept from the distractors.", "Finally, for each question, the worker chooses one additional distractor from CONCEPTNET and authors another distractor manually.", "Thus, in total, five candidate answers accompany each question.", "Because questions are generated freely by workers, they often require background knowledge that is trivial to humans but is seldom explicitly reported on the web due to reporting bias (Gordon and Van Durme, 2013).", "Thus, questions in COMMONSENSEQA have a different nature compared to prior QA benchmarks, where questions are authored given an input text.", "Using our method, we collected 12,247 commonsense questions.", "We present an analysis that illustrates the uniqueness of the gathered questions compared to prior work, and the types of commonsense skills required to tackle them.", "We extensively evaluate models on COMMONSENSEQA, experimenting with pre-trained models, fine-tuned models, and reading comprehension (RC) models that utilize web snippets extracted from a Google search on top of the question itself.", "We find that fine-tuning BERT-LARGE (Devlin et al., 2018) on COMMONSENSEQA obtains the best performance, reaching an accuracy of 55.9%.", "This is substantially lower than human performance, which is 88.9%.", "To summarize, our contributions are: 1. A new QA dataset centered around common sense, containing 12,247 examples; 2. A new method for generating commonsense questions at scale from CONCEPTNET (a sketch of its grouping step follows below); 3. An empirical evaluation of state-of-the-art NLU models on COMMONSENSEQA, showing that humans substantially outperform current models.",
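As referenced in contribution 2 above, the core grouping step of the generation method (collecting triplets that share a question concept and relation into question sets of three target concepts) can be sketched as follows. The sampling policy for choosing three answer concepts per (q, r) pair is an assumption; the text does not specify it.

```python
import random
from collections import defaultdict

def build_question_sets(triplets, k=3, seed=0):
    """Group filtered CONCEPTNET triplets (q, r, a) by question concept
    and relation, and emit question sets of k answer concepts each."""
    by_key = defaultdict(list)
    for q, r, a in triplets:
        by_key[(q, r)].append(a)
    rng = random.Random(seed)
    return [(q, r, rng.sample(answers, k))
            for (q, r), answers in by_key.items() if len(answers) >= k]
```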
"The dataset can be downloaded from www.tau-nlp.org/commonsenseqa.", "The code for all our baselines is available at github.com/jonathanherzig/commonsenseqa.", "Machine common sense, or the knowledge of and ability to reason about an open-ended world, has long been acknowledged as a critical component of natural language understanding.", "Early work sought programs that could reason about an environment in natural language (McCarthy, 1959), or leverage a world model for deeper language understanding (Winograd, 1972).", "Many commonsense representations and inference procedures have been explored (McCarthy and Hayes, 1969; Kowalski and Sergot, 1986), and large-scale commonsense knowledge bases have been developed (Lenat, 1995; Speer et al., 2017).", "However, evaluating the degree of common sense possessed by a machine remains difficult.", "One important benchmark, the Winograd Schema Challenge (Levesque, 2011), asks models to correctly solve paired instances of coreference resolution.", "While the Winograd Schema Challenge remains a tough dataset, the difficulty of generating examples has led to only a small available collection of 150 examples.", "The Choice of Plausible Alternatives (COPA) is a similarly important but small dataset consisting of 500 development and 500 test questions (Roemmele et al., 2011).", "Each question asks which of two alternatives best reflects a cause or effect relation to the premise.", "For both datasets, scalability is an issue when evaluating modern modeling approaches.", "With the recent adoption of crowdsourcing, several larger datasets have emerged, focusing on predicting relations between situations or events in natural language.", "JHU Ordinal Commonsense Inference requests a label from 1-5 for the plausibility that one situation entails another (Zhang et al., 2017).", "The Story Cloze Test (also referred to as ROC Stories) pits ground-truth endings to stories against implausible false ones (Mostafazadeh et al., 2016).", "Interpolating between these approaches, Situations with Adversarial Generations (SWAG) asks models to choose the correct description of what happens next after an initial event (Zellers et al., 2018b).", "LM-based techniques achieve very high performance on the Story Cloze Test and SWAG by fine-tuning a pre-trained LM on the target task (Radford et al., 2018; Devlin et al., 2018).", "Investigations of commonsense datasets, and of natural language datasets more generally, have revealed the difficulty of creating benchmarks that measure the understanding of a program rather than its ability to take advantage of distributional biases and to model the annotation process (Gururangan et al., 2018; Poliak et al., 2018).", "Annotation artifacts in the Story Cloze Test, for example, allow models to achieve high performance while only looking at the proposed endings and ignoring the stories (Schwartz et al., 2017; Cai et al., 2017).", "Thus, the development of benchmarks for common sense remains a difficult challenge.", "Researchers have also investigated question answering that utilizes common sense.", "Science questions often require common sense and have recently received attention (Clark et al., 2018; Mihaylov et al., 2018; Ostermann et al., 2018); however, they also need specialized scientific knowledge.", "In contrast to these efforts, our work studies common sense without requiring additional information.",
information.", "SQUABU is a small hand-curated test of common sense and science questions (Davis, 2016), which are difficult for current techniques to solve.", "In this work, we create similarly well-crafted questions but at a larger scale.", "Our goal is to develop a method for generating questions that can be easily answered by humans without context, and require commonsense knowledge.", "We generate multiple-choice questions in a process that comprises the following steps:", "1. We extract subgraphs from CONCEPTNET, each with one source concept and three target concepts.", "2. We ask crowdsourcing workers to author three questions per subgraph (one per target concept), to add two additional distractors per question, and to verify questions' quality.", "3. We add textual context to each question by querying a search engine and retrieving web snippets.", "The entire data generation process is summarized in Figure 2.", "We now elaborate on each of the steps.", "Extraction from CONCEPTNET: CONCEPTNET is a graph knowledge-base $G \subseteq C \times R \times C$, where the nodes $C$ represent natural language concepts, and edges $R$ represent commonsense relations.", "Triplets $(c_1, r, c_2)$ carry commonsense knowledge such as '(gambler, CapableOf, lose money)'.", "CONCEPTNET contains 32 million triplets.", "To select a subset of triplets for crowdsourcing we take the following steps:", "1. We filter triplets with general relations (e.g., RelatedTo) or relations that are already well-explored in NLP (e.g., IsA).", "In total we use 22 relations.", "2. We filter triplets where one of the concepts is more than four words long or not in English.", "3. We filter triplets where the edit distance between $c_1$ and $c_2$ is too low.", "This results in a set of 236,208 triplets $(q, r, a)$, where we call the first concept the question concept and the second concept the answer concept.", "We aim to generate questions that contain the question concept and where the answer is the answer concept.", "To create multiple-choice questions we need to choose distractors for each question.", "Sampling distractors at random from CONCEPTNET is a bad solution, as such distractors are easy to eliminate using simple surface clues.", "To remedy this, we propose to create question sets: for each question concept $q$ and relation $r$ we group three different triplets $\{(q, r, a_1), (q, r, a_2), (q, r, a_3)\}$ (see Figure 1).", "This generates three answer concepts that are semantically similar and have a similar relation to the question concept $q$.", "This primes crowd workers to formulate questions that require background knowledge about the concepts in order to answer the question.", "The above procedure generates approximately 130,000 triplets (43,000 question sets), for which we can potentially generate questions (see the grouping sketch below).", "Crowdsourcing questions We used Amazon Mechanical Turk (AMT) workers to generate and validate commonsense questions.", "AMT workers saw, for every question set, the question concept and three answer concepts.", "They were asked to formulate three questions, where all questions contain the question concept.", "Each question should have as an answer one of the answer concepts, but not the other two.", "To discourage workers from providing simple surface clues for the answer, they were instructed to avoid using words that have a strong relation to the answer concept, for example, not to use the word 'open' when the answer is 'door'.", "Formulating questions for our task is nontrivial.", "Thus, we only accept annotators for which at least 75% of the questions they formulate pass the verification process described below.",
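The grouping of filtered triplets into question sets lends itself to a short illustration. The following is a minimal sketch of that chunking step, not the authors' released code; the (q, r, a) tuple format, the function name, and the toy input are assumptions made for illustration.

```python
from collections import defaultdict

def build_question_sets(triplets, set_size=3):
    """Group ConceptNet triplets (q, r, a) into question sets of three
    answer concepts sharing the same question concept and relation."""
    by_key = defaultdict(list)
    for q, r, a in triplets:
        by_key[(q, r)].append(a)

    question_sets = []
    for (q, r), answers in by_key.items():
        answers = sorted(set(answers))
        # Chunk the available answer concepts into disjoint sets of three.
        for i in range(0, len(answers) - len(answers) % set_size, set_size):
            question_sets.append((q, r, answers[i:i + set_size]))
    return question_sets

# Hypothetical toy input mirroring Figure 1.
triplets = [("river", "AtLocation", a) for a in ("waterfall", "bridge", "valley")]
print(build_question_sets(triplets))
# [('river', 'AtLocation', ['bridge', 'valley', 'waterfall'])]
```

Distractor selection can then draw on the remaining answer concepts that share the same (q, r) key, matching the "same relation to the question concept" criterion described in the text.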
"Adding additional distractors To make the task more difficult, we ask crowd-workers to add two additional incorrect answers to each formulated question.", "One distractor is selected from a set of answer concepts with the same relation to the question concept in CONCEPTNET (Figure 1, in red).", "The second distractor is formulated manually by the workers themselves (Figure 1, in purple).", "Workers were encouraged to formulate a distractor that would seem plausible or related to the question but easy for humans to dismiss as incorrect.", "In total, each formulated question is accompanied by five candidate answers, including one correct answer and four distractors.", "Table 1 (key statistics for COMMONSENSEQA): # CONCEPTNET distinct question nodes: 2,254; # CONCEPTNET distinct answer nodes: 12,094; # CONCEPTNET distinct nodes: 12,107; # CONCEPTNET distinct relation labels: 22; average question length (tokens): 13.41; long questions (more than 20 tokens): 10.3%; average answer length (tokens): 1.5; # answers with more than 1 token: 44%; # of distinct words in questions: 14,754; # of distinct words in answers: 4,911.", "Verifying question quality We train a disjoint group of workers to verify the generated questions.", "Verifiers annotate a question as unanswerable, or choose the right answer.", "Each question is verified by 2 workers, and only questions for which at least one worker answered correctly are used.", "This process filters out 15% of the questions.", "Adding textual context To examine whether web text is useful for answering commonsense questions, we add textual information to each question in the following way: we issue a web query to Google search for every question and candidate answer, concatenating the answer to the question, e.g., 'What does a parent tell their child to do after they've played with a lot of toys?' + 'clean room'.", "We take the first 100 result snippets for each of the five answer candidates, yielding a context of 500 snippets per question.", "Using this context, we can investigate the performance of reading comprehension (RC) models on COMMONSENSEQA.", "Overall, we generated 12,247 final examples, from a total of 16,242 that were formulated.", "The total cost per question is $0.33.", "Table 1 describes the key statistics of COMMONSENSEQA.", "CONCEPTNET concepts and relations COMMONSENSEQA builds on CONCEPTNET, which contains concepts such as dog, house, or row boat, connected by relations such as Causes, CapableOf, or Antonym.", "The top-5 question concepts in COMMONSENSEQA are 'Person' (3.1%), 'People' (2.0%), 'Human' (0.7%), 'Water' (0.5%) and 'Cat' (0.5%).", "In addition, we present the main relations along with the percentage of questions generated from them in Table 2 (each relation is listed with an example formulated question; e.g., AtLocation: 'Where would I not want a fox?').", "It's worth noting that since question formulators were not shown the CONCEPTNET relation, they often asked questions that probe other relationships between the concepts.", "For example, the question 'What do audiences clap for?' was generated from the AtLocation relation, but focuses on social conventions instead.",
"Question formulation Question formulators were instructed to create questions with high language variation.", "122 formulators contributed to question generation.", "However, 10 workers formulated more than 85% of the questions.", "We analyzed the distribution of first and second words in the formulated questions along with example questions.", "Figure 4 presents the breakdown.", "Interestingly, only 44% of the first words are WH-words.", "In about 5% of the questions, formulators used first names to create a context story, and in 7% they used the word 'if' to present a hypothetical question.", "This suggests high variability in the question language.", "Commonsense skills To analyze the types of commonsense skills needed to answer questions in COMMONSENSEQA, we randomly sampled 100 examples from the development set and performed the following analysis.", "For each question, we explicitly annotated the types of commonsense skills that a human uses to answer the question.", "We allow multiple commonsense skills per question, with an average of 1.75 skills per question.", "Figure 3 provides three example annotations.", "Each annotation contains a node for the answer concept, and other nodes for concepts that appear in the question or latent concepts.", "Labeled edges describe the commonsense skill that relates the two nodes.", "We defined commonsense skills based on the analysis of LoBue and Yates (2011), with slight modifications to accommodate the phenomena in our data.", "Table 3 presents the skill categories we used, their definition and their frequency in the analyzed examples.", "Our goal is to collect a dataset of commonsense questions that are easy for humans, but hard for current NLU models.", "To evaluate this, we experiment with multiple baselines.", "Table 4 summarizes the various baseline types and characterizes them based on (a) whether training is done on COMMONSENSEQA or the model is fully pre-trained, and (b) whether web snippets are used as context.", "We now elaborate on the different baselines.", "(a) VECSIM: a model that chooses the answer with highest cosine similarity to the question, where the question and answers are represented by an average of pre-trained word embeddings.", "(b) LM1B: inspired by Trinh and Le (2018), we employ a large language model (LM) from Jozefowicz et al. (2016), which was pre-trained on the One Billion Words Benchmark (Chelba et al., 2013).", "We use this model in two variations.", "In the first (LM1B-CONCAT), we simply concatenate each answer to the question.", "In the second (LM1B-REP), we first cluster questions according to their first two words.", "Then, we recognize five high-frequency prefixes that cover 35% of the development set (e.g., 'what is').", "We rephrase questions that fit into one of these prefixes as a declarative sentence that contains the answer (sketched below).", "E.g., we rephrase 'What is usually next to a door?' and the candidate answer 'wall' to 'Wall is usually next to a door'.", "For questions that do not start with the above prefixes, we concatenate the answer as in LM1B-CONCAT.", "In both variations we return the answer with highest LM probability.", "(c) QABILINEAR: this model, proposed by Yu et al. (2014) for QA, scores an answer $a_i$ with a bilinear model: $q W a_i^{\top}$, where the question $q$ and answers $a_i$ are the average pre-trained word embeddings and $W$ is a learned parameter matrix.",
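The LM1B-REP rephrasing heuristic described above can be pictured with a small sketch. The prefix list below is hypothetical (the text only names 'what is' explicitly), and the rewrite rule is a simplified guess at the transformation; in the actual system the resulting sentence is scored under the pre-trained LM.

```python
# Hypothetical prefixes; the paper only names "what is" explicitly.
PREFIXES = ("what is", "what do", "where do", "where would", "what would")

def rephrase(question, answer):
    """Rephrase a question as a declarative sentence containing the
    candidate answer (LM1B-REP); otherwise fall back to concatenation
    (LM1B-CONCAT). A sketch of the heuristic, not the authors' code."""
    q = question.lower().rstrip("?").strip()
    for prefix in PREFIXES:
        if q.startswith(prefix):
            remainder = q[len(prefix):].strip()
            verb = prefix.split()[1]          # "is", "do", ...
            return f"{answer.capitalize()} {verb} {remainder}"
    return f"{question} {answer}"             # LM1B-CONCAT fallback

print(rephrase("What is usually next to a door?", "wall"))
# Wall is usually next to a door
```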
"A softmax layer over the candidate answers is used to train the model with cross-entropy loss.", "(d) QACOMPARE: this model is similar to an NLI model from Liu et al. (2016).", "The model represents the interaction between the question $q$ and a candidate answer $a_i$ as: $h = \mathrm{relu}([q; a_i; q \odot a_i; q - a_i] W_1 + b_1)$, where ';' denotes concatenation and $\odot$ is element-wise product.", "Then, the model predicts an answer score using a feed-forward layer: $h W_2 + b_2$.", "Average pre-trained embeddings and softmax are used to train the model.", "(e) ESIM: we use ESIM, a strong NLI model (Chen et al., 2016).", "Similar to Zellers et al. (2018b), we change the output layer size to the number of candidate answers, and apply softmax to train with cross-entropy loss.", "(f) BIDAF++: a state-of-the-art RC model that uses the retrieved Google web snippets (Section 3) as context.", "We augment BIDAF (Seo et al., 2016) with a self-attention layer and ELMo representations (Peters et al., 2018; Huang et al., 2018).", "(g) GENERATIVE PRE-TRAINED TRANSFORMER (GPT): Radford et al. (2018) proposed a method for adapting pre-trained LMs to perform a wide range of tasks.", "We applied their model to COMMONSENSEQA by encoding each question and its candidate answers as a series of delimiter-separated sequences.", "For example, the question 'If you needed a lamp to do your work, where would you put it?' and the candidate answer 'bedroom' would become '[start] If ... ? [sep] bedroom [end]'.", "The hidden representations over each [end] token are converted to logits by a linear transformation and passed through a softmax to produce final probabilities for the answers.", "We used the same pre-trained LM and hyper-parameters for fine-tuning as Radford et al. (2018) on ROC Stories, except with a batch size of 10.", "(h) BERT: similarly to the GPT, BERT fine-tunes a language model and currently holds state-of-the-art across a broad range of tasks (Devlin et al., 2018).", "BERT uses a masked language modeling objective, which predicts missing words masked from unlabeled text.", "To apply BERT to COMMONSENSEQA, we linearize each question-answer pair into a delimiter-separated sequence (i.e., '[CLS] If ... ? [SEP] bedroom [SEP]') then fine-tune the pre-trained weights from uncased BERT-LARGE.", "(The original weights and code released by Google may be found here: https://github.com/google-research/bert.)", "Similarly to the GPT, the hidden representations over each [CLS] token are run through a softmax layer to create the predictions.", "We used the same hyper-parameters as Devlin et al. (2018) for SWAG.",
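A minimal sketch of the delimiter-separated linearization used for GPT and BERT, and of the softmax over per-candidate scores. The `score_fn` stub and the candidate answers are placeholders for the actual fine-tuned model and data; none of this is the released code.

```python
import numpy as np

def linearize(question, candidates, style="bert"):
    """Build the delimiter-separated input, one sequence per candidate,
    mirroring the formats described above."""
    if style == "bert":
        return [f"[CLS] {question} [SEP] {c} [SEP]" for c in candidates]
    return [f"[start] {question} [sep] {c} [end]" for c in candidates]

def predict(question, candidates, score_fn):
    """`score_fn` stands in for the model: it maps one linearized
    sequence to a scalar logit (taken at [CLS] / [end] in the paper)."""
    logits = np.array([score_fn(s) for s in linearize(question, candidates)])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over the five candidates
    return candidates[int(np.argmax(probs))], probs

q = "If you needed a lamp to do your work, where would you put it?"
cands = ["bedroom", "desktop", "garage", "attic", "store"]   # hypothetical
answer, p = predict(q, cands, score_fn=lambda s: float(len(s) % 7))  # dummy scorer
```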
"Experimental Setup We split the data into a training/development/test set with an 80/10/10 split.", "We perform two types of splits:", "(a) random split, where questions are split uniformly at random, and", "(b) question concept split, where each of the three sets has disjoint question concepts (sketched below).", "We empirically find (see below) that a random split is harder for models that learn from COMMONSENSEQA, because the same question concept appears in the training set and development/test set with different answer concepts, and networks that memorize might fail in such a scenario.", "Since the random split is harder, we consider it the primary split of COMMONSENSEQA.", "We evaluate all models on the test set using accuracy (proportion of examples for which prediction is correct), and tune hyper-parameters for all trained models on the development set.", "To understand the difficulty of the task, we add a SANITY mode, where we replace the hard distractors (the one that shares a relation with the question concept and the one formulated by a worker) with random CONCEPTNET distractors.", "We expect a reasonable baseline to perform much better in this mode.", "For pre-trained word embeddings we consider 300d GloVe embeddings (Pennington et al., 2014) and 300d Numberbatch CONCEPTNET node embeddings (Speer et al., 2017), which are kept fixed at training time.", "We also combine ESIM with 1024d ELMo contextual representations, which are also fixed during training.", "Human Evaluation To test human accuracy, we created a separate task for which we did not use a qualification test, nor used AMT master workers.", "We sampled 100 random questions and for each question gathered answers from five workers that were not involved in question generation.", "Humans obtain 88.9% accuracy, taking a majority vote for each question.", "Results Table 5 presents test set results for all models and setups.", "The best baselines are BERT-LARGE and GPT with an accuracy of 55.9% and 45.5%, respectively, on the random split (63.6% and 55.5%, respectively, on the question concept split).", "This is well below human accuracy, demonstrating that the benchmark is much easier for humans.", "Nevertheless, this result is much higher than random (20%), showing the ability of language models to store large amounts of information related to commonsense knowledge.", "The top part of Table 5 describes untrained models.", "We observe that performance is higher than random, but still quite low.", "The middle part describes models that were trained on COMMONSENSEQA, where BERT-LARGE obtains best performance, as mentioned above.", "ESIM models follow BERT-LARGE and GPT, and obtain much lower performance.", "We note that ELMo representations did not improve performance compared to GloVe embeddings, possibly because we were unable to improve performance by back-propagating into the representations themselves (as we do in BERT-LARGE and GPT).", "Table 5 (test set accuracy for all models; columns are random split Accuracy/SANITY, then question concept split Accuracy/SANITY): VECSIM+NUMBERBATCH 29.1/54.0, 30.3/54.9; LM1B-REP 26.1/39.6, 26.0/39.1; LM1B-CONCAT 25.3/37.4, 25.3/35.2; VECSIM+GLOVE 22.3/26.8, 20.8/27.1; BERT-LARGE 55.9/92.3, 63.6/93.2; GPT 45.5/87.2, 55.5/88.9; ESIM+ELMO 34.1/76.9, 37.9/77.8; ESIM+GLOVE 32.8/79.1, 40.4/78.2; QABILINEAR+GLOVE 31.5/74.8, 34.2/71.8; ESIM+NUMBERBATCH 30.1/74.6, 31.2/75.1; QABILINEAR+NUMBERBATCH 28.8/73.3, 32.0/71.6; QACOMPARE+GLOVE 25.7/69.2, 34.1/71.3; QACOMPARE+NUMBERBATCH 20.4/60.6, 25.2/66.8; BIDAF++ 32.0/71.0, 38.4/72.0; HUMAN 88.9.",
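The question concept split described at the start of the experimental setup can be sketched as follows; the 'question_concept' field name and the way the ratios are applied over concepts are assumptions for illustration.

```python
import random
from collections import defaultdict

def question_concept_split(examples, ratios=(0.8, 0.1, 0.1), seed=0):
    """Split examples so that train/dev/test receive disjoint question
    concepts. Each example is assumed to carry a 'question_concept' key."""
    by_concept = defaultdict(list)
    for ex in examples:
        by_concept[ex["question_concept"]].append(ex)

    concepts = sorted(by_concept)
    random.Random(seed).shuffle(concepts)
    n = len(concepts)
    cut1, cut2 = int(ratios[0] * n), int((ratios[0] + ratios[1]) * n)

    splits = []
    for chunk in (concepts[:cut1], concepts[cut1:cut2], concepts[cut2:]):
        splits.append([ex for c in chunk for ex in by_concept[c]])
    return splits  # train, dev, test
```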
"The bottom part shows results for BIDAF++, which uses web snippets as context.", "We observe that using snippets does not lead to high performance, hinting that they do not carry a lot of useful information.", "Performance on the random split is five points lower than the question concept split on average across all trained models.", "We hypothesize that this is because having questions in the development/test set that share a question concept with the training set, but have a different answer, creates difficulty for networks that memorize the relation between a question concept and an answer.", "Lastly, all SANITY models that were trained on COMMONSENSEQA achieve very high performance (92% for BERT-LARGE), showing that selecting difficult distractors is crucial.", "Baseline analysis To understand the performance of BERT-LARGE, we analyzed 100 examples from the development set (Table 6).", "We labeled examples with categories (possibly more than one per example) and then computed the average accuracy of the model for each category.", "We found that the model does well (77.7% accuracy) on examples where surface clues hint at the correct answer.", "Examples that involve negation or understanding antonyms have lower accuracy (42.8%), similarly to examples that require factoid knowledge (38.4%).", "Accuracy is particularly low in questions where the correct answer has finer granularity compared to one of the distractors (35.4%), and in cases where the correct answer needs to meet a conjunction of conditions, and the distractor meets only one of them (23.8%).", "Learning Curves To extrapolate how current models might perform with more data, we evaluated BERT-LARGE on the development set, training with varying amounts of data.", "The resulting learning curves are plotted in Figure 5 (development accuracy for BERT-LARGE trained with varying amounts of data).", "For each training set size, hyper-parameters were identical to Section 5, except the number of epochs was varied to keep the number of mini-batches during training constant.", "To deal with learning instabilities, each data point is the best of 3 runs.", "We observe that the accuracy of BERT-LARGE is expected to be roughly 75% assuming 100k examples, still substantially lower than human performance.", "We present COMMONSENSEQA, a new QA dataset that contains 12,247 examples and aims to test commonsense knowledge.", "We describe a process for generating difficult questions at scale using CONCEPTNET, perform a detailed analysis of the dataset, which elucidates the unique properties of our dataset, and extensively evaluate on a strong suite of baselines.", "We find that the best model is a pre-trained LM tuned for our task and obtains 55.9% accuracy, dozens of points lower than human accuracy.", "We hope that this dataset facilitates future work in incorporating commonsense knowledge into NLU systems.", "We thank the anonymous reviewers for their constructive feedback.", "This work was completed in partial fulfillment for the PhD degree of Jonathan Herzig, which was also supported by a Google PhD fellowship.", "This research was partially supported by The Israel Science Foundation grant 942/16, The Blavatnik Computer Science Research Fund and The Yandex Initiative for Machine Learning." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "objective", "result", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "result", "abstain", "objective", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "objective", "method", "result", "abstain", "other", "other", "other" ]
[ "We present a novel deep learning architecture to address the natural language inference (NLI) task.", "Existing approaches mostly rely on simple reading mechanisms for independent encoding of the premise and hypothesis.", "Instead, we propose a novel dependent reading bidirectional LSTM network (DR-BiLSTM) to efficiently model the relationship between a premise and a hypothesis during encoding and inference.", "We also introduce a sophisticated ensemble strategy to combine our proposed models, which noticeably improves final predictions.", "Finally, we demonstrate how the results can be improved further with an additional preprocessing step.", "Our evaluation shows that DR-BiLSTM obtains the best single model and ensemble model results, achieving the new state-of-the-art scores on the Stanford NLI dataset.", "Natural Language Inference (NLI; a.k.a. Recognizing Textual Entailment, or RTE) is an important and challenging task for natural language understanding (MacCartney and Manning, 2008).", "The goal of NLI is to identify the logical relationship (entailment, neutral, or contradiction) between a premise and a corresponding hypothesis.", "Table 1 shows a few example relationships from the Stanford Natural Language Inference (SNLI) dataset (Bowman et al., 2015).", "(An arXiv version of this work can be found at arxiv.org/pdf/1802.05577.pdf; this work was conducted as part of an internship program at Philips Research.)", "The task has recently drawn considerable attention thanks to the availability of large annotated datasets like SNLI (Bowman et al., 2015).", "Various deep learning models have been proposed that achieve successful results for this task (Gong et al., 2017; Wang et al., 2017; Chen et al., 2017; Yu and Munkhdalai, 2017a; Parikh et al., 2016; Zhao et al., 2016; Sha et al., 2016).", "Most of these existing NLI models use attention mechanisms to jointly interpret and align the premise and hypothesis.", "Such models use simple reading mechanisms to encode the premise and hypothesis independently.", "However, such a complex task requires explicit modeling of dependency relationships between the premise and the hypothesis during the encoding and inference processes to prevent the network from the loss of relevant, contextual information.", "In this paper, we refer to such strategies as dependent reading.", "There are some alternative reading mechanisms available in the literature (Sha et al., 2016; Rocktaschel et al., 2015) that consider dependency aspects of the premise-hypothesis relationships.", "However, these mechanisms have two major limitations.", "First, they have only explored dependency aspects during the encoding stage, while ignoring its benefit during inference.", "Second, such models only consider encoding a hypothesis depending on the premise, disregarding the dependency aspects in the opposite direction.", "We propose a dependent reading bidirectional LSTM (DR-BiLSTM) model to address these limitations.", "Given a premise $u$ and a hypothesis $v$, our model first encodes them considering dependency on each other ($u|v$ and $v|u$).", "Next, the model employs a soft attention mechanism to extract relevant information from these encodings.", "The augmented sentence representations are then passed to the inference stage, which uses a similar dependent reading strategy in both directions, i.e. $u \rightarrow v$ and $v \rightarrow u$.",
"Finally, a decision is made through a multi-layer perceptron (MLP) based on the aggregated information.", "Our experiments on the SNLI dataset show that DR-BiLSTM achieves the best single model and ensemble model performance, obtaining improvements of a considerable margin of 0.4% and 0.3% over the previous state-of-the-art single and ensemble models, respectively.", "Furthermore, we demonstrate the importance of a simple preprocessing step performed on the SNLI dataset.", "Evaluation results show that such preprocessing allows our single model to achieve the same accuracy as the state-of-the-art ensemble model and improves our ensemble model to outperform the state-of-the-art ensemble model by a remarkable margin of 0.7%.", "Finally, we perform an extensive analysis to clarify the strengths and weaknesses of our models.", "Early studies use small datasets while leveraging lexical and syntactic features for NLI (MacCartney and Manning, 2008).", "The recent availability of large-scale annotated datasets (Bowman et al., 2015; Williams et al., 2017) has enabled researchers to develop various deep learning-based architectures for NLI.", "Parikh et al. (2016) propose an attention-based model (Bahdanau et al., 2014) that decomposes the NLI task into sub-problems to solve them in parallel.", "They further show the benefit of adding intra-sentence attention to input representations.", "Chen et al. (2017) explore sequential inference models based on chain LSTMs with attentional input encoding and demonstrate the effectiveness of syntactic information.", "We also use similar attention mechanisms.", "However, our model is distinct from these models as they do not benefit from dependent reading strategies.", "Rocktaschel et al. (2015) use a word-by-word neural attention mechanism while Sha et al. (2016) propose re-read LSTM units by considering the dependency of a hypothesis on the information of its premise ($v|u$) to achieve promising results.", "However, these models suffer from weak inferencing methods by disregarding the dependency aspects from the opposite direction ($u|v$).", "Intuitively, when a human judges a premise-hypothesis relationship, s/he might consider back-and-forth reading of both sentences before coming to a conclusion.", "Therefore, it is essential to encode the premise-hypothesis dependency relations from both directions to optimize the understanding of their relationship.", "Wang et al. (2017) propose a bilateral multi-perspective matching (BiMPM) model, which resembles the concept of matching a premise and hypothesis from both directions.", "Their matching strategy is essentially similar to our attention mechanism that utilizes relevant information from the other sentence for each word sequence.", "They use similar methods as Chen et al. (2017) for encoding and inference, without any dependent reading mechanism.",
"Although NLI is well studied in the literature, the potential of dependent reading and interaction between a premise and hypothesis is not rigorously explored.", "In this paper, we address this gap by proposing a novel deep learning model (DR-BiLSTM).", "Experimental results demonstrate the effectiveness of our model.", "Our proposed model (DR-BiLSTM) is composed of the following major components: input encoding, attention, inference, and classification.", "Figure 1 demonstrates a high-level view of our proposed NLI framework.", "Let $u = [u_1, \ldots, u_n]$ and $v = [v_1, \ldots, v_m]$ be the given premise with length $n$ and hypothesis with length $m$ respectively, where $u_i, v_j \in \mathbb{R}^r$ is an $r$-dimensional word embedding vector.", "The task is to predict a label $y$ that indicates the logical relationship between premise $u$ and hypothesis $v$.", "RNNs are the natural solution for variable length sequence modeling; consequently, we utilize bidirectional LSTM (BiLSTM) (Hochreiter and Schmidhuber, 1997) for encoding the given sentences.", "For ease of presentation, we only describe how we encode $u$ depending on $v$.", "The same procedure is utilized for the reverse direction ($v|u$).", "To dependently encode $u$, we first process $v$ using the BiLSTM.", "Then we read $u$ through the BiLSTM that is initialized with the previous reading's final states (memory cell and hidden state).", "Here we represent a word (e.g. $u_i$) and its context depending on the other sentence (e.g. $v$).", "Equations 1 and 2 formally represent this component: $\bar{v}, s_v = \mathrm{BiLSTM}(v, 0)$, $\hat{u}, - = \mathrm{BiLSTM}(u, s_v)$ (1); $\bar{u}, s_u = \mathrm{BiLSTM}(u, 0)$, $\hat{v}, - = \mathrm{BiLSTM}(v, s_u)$ (2), where $\{\bar{u} \in \mathbb{R}^{n \times 2d}, \hat{u} \in \mathbb{R}^{n \times 2d}, s_u\}$ and $\{\bar{v} \in \mathbb{R}^{m \times 2d}, \hat{v} \in \mathbb{R}^{m \times 2d}, s_v\}$ are the independent reading sequences, dependent reading sequences, and BiLSTM final state of independent reading of $u$ and $v$ respectively.", "Note that in these equations '$-$' means that we do not care about the associated variable and its value.", "BiLSTM inputs are the word embedding sequences and initial state vectors.", "$\hat{u}$ and $\hat{v}$ are passed to the next layer as the output of the input encoding component.", "The proposed encoding mechanism yields a richer representation for both premise and hypothesis by taking the history of each other into account.", "Using a max or average pooling over the independent and dependent readings does not further improve our model.", "This was expected since dependent reading produces more promising and relevant encodings.", "We employ a soft alignment method to associate the relevant sub-components between the given premise and hypothesis.", "In deep learning models, such purpose is often achieved with a soft attention mechanism.", "Here we compute the unnormalized attention weights as the similarity of hidden states of the premise and hypothesis with Equation 3 (energy function): $e_{ij} = \hat{u}_i \hat{v}_j^{\top}$ (3), where $\hat{u}_i$ and $\hat{v}_j$ are the dependent reading hidden representations of $u$ and $v$ respectively, which are computed earlier in Equations 1 and 2.", "Next, for each word in either premise or hypothesis, the relevant semantics in the other sentence is extracted and composed according to $e_{ij}$.", "Equations 4 and 5 provide formal and specific details of this procedure: $\tilde{u}_i = \sum_{j=1}^{m} \frac{\exp(e_{ij})}{\sum_{k=1}^{m} \exp(e_{ik})} \hat{v}_j, \; i \in [1, n]$ (4); $\tilde{v}_j = \sum_{i=1}^{n} \frac{\exp(e_{ij})}{\sum_{k=1}^{n} \exp(e_{kj})} \hat{u}_i, \; j \in [1, m]$ (5), where $\tilde{u}_i$ represents the extracted relevant information of $\hat{v}$ by attending to $\hat{u}_i$ while $\tilde{v}_j$ represents the extracted relevant information of $\hat{u}$ by attending to $\hat{v}_j$.",
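A minimal PyTorch sketch of the dependent reading encoder (Equations 1 and 2) and the soft attention of Equations 3-5. Sharing one BiLSTM for the independent and dependent readings is a simplification, and batching, masking, and hyper-parameters are assumed rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependentReader(nn.Module):
    """Sketch of dependent reading (Eqs. 1-2) and soft attention (Eqs. 3-5)."""
    def __init__(self, emb_dim=300, hid_dim=450):
        super().__init__()
        self.rnn = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)

    def forward(self, u, v):              # u: (B, n, emb), v: (B, m, emb)
        _, s_v = self.rnn(v)              # independent reading of v -> final states
        u_hat, _ = self.rnn(u, s_v)       # dependent reading: u conditioned on v
        _, s_u = self.rnn(u)
        v_hat, _ = self.rnn(v, s_u)       # dependent reading: v conditioned on u

        e = torch.bmm(u_hat, v_hat.transpose(1, 2))      # energies (B, n, m), Eq. 3
        u_tilde = torch.bmm(F.softmax(e, dim=2), v_hat)  # Eq. 4: attend over v
        v_tilde = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), u_hat)  # Eq. 5
        return u_hat, v_hat, u_tilde, v_tilde
```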
"To further enrich the collected attentional information, a trivial next step would be to pass the concatenation of the tuples $(\hat{u}_i, \tilde{u}_i)$ or $(\hat{v}_j, \tilde{v}_j)$, which provides a linear relationship between them.", "However, the model would suffer from the absence of similarity and closeness measures.", "Therefore, we calculate the difference and element-wise product for the tuples $(\hat{u}_i, \tilde{u}_i)$ and $(\hat{v}_j, \tilde{v}_j)$ that represent the similarity and closeness information respectively (Chen et al., 2017; Kumar et al., 2016).", "The difference and element-wise product are then concatenated with the computed vectors $(\hat{u}_i, \tilde{u}_i)$ or $(\hat{v}_j, \tilde{v}_j)$, respectively.", "Finally, a feedforward neural layer with ReLU activation function projects the concatenated vectors from an $8d$-dimensional vector space into a $d$-dimensional vector space (Equations 6 and 7).", "This helps the model to capture deeper dependencies between the sentences besides lowering the complexity of vector representations.", "$a_i = [\hat{u}_i, \tilde{u}_i, \hat{u}_i - \tilde{u}_i, \hat{u}_i \odot \tilde{u}_i]$, $p_i = \mathrm{ReLU}(W_p a_i + b_p)$ (6); $b_j = [\hat{v}_j, \tilde{v}_j, \hat{v}_j - \tilde{v}_j, \hat{v}_j \odot \tilde{v}_j]$, $q_j = \mathrm{ReLU}(W_p b_j + b_p)$ (7).", "Here $\odot$ stands for element-wise product while $W_p \in \mathbb{R}^{8d \times d}$ and $b_p \in \mathbb{R}^d$ are the trainable weights and biases of the projector layer respectively.", "During this phase, we use another BiLSTM to aggregate the two sequences of computed matching vectors, $p$ and $q$, from the attention stage (Section 3.2).", "This aggregation is performed in a sequential manner to avoid losing effect of latent variables that might rely on the sequence of matching vectors.", "Instead of aggregating the sequences of matching vectors individually, we propose a similar dependent reading approach for the inference stage.", "We employ a BiLSTM reading process (Equations 8 and 9) similar to the input encoding step discussed in Section 3.1.", "But rather than passing just the dependent reading information to the next step, we feed both independent reading ($\bar{p}$ and $\bar{q}$) and dependent reading ($\hat{p}$ and $\hat{q}$) to a max pooling layer, which selects maximum values from each sequence of independent and dependent readings ($\bar{p}_i$ and $\hat{p}_i$) as shown in Equations 10 and 11.", "The main intuition behind this architecture is to maximize the inferencing ability of the model by considering both independent and dependent readings: $\bar{q}, s_q = \mathrm{BiLSTM}(q, 0)$, $\hat{p}, - = \mathrm{BiLSTM}(p, s_q)$ (8); $\bar{p}, s_p = \mathrm{BiLSTM}(p, 0)$, $\hat{q}, - = \mathrm{BiLSTM}(q, s_p)$ (9); $\tilde{p} = \mathrm{MaxPooling}(\bar{p}, \hat{p})$ (10); $\tilde{q} = \mathrm{MaxPooling}(\bar{q}, \hat{q})$ (11).", "Here $\{\bar{p} \in \mathbb{R}^{n \times 2d}, \hat{p} \in \mathbb{R}^{n \times 2d}, s_p\}$ and $\{\bar{q} \in \mathbb{R}^{m \times 2d}, \hat{q} \in \mathbb{R}^{m \times 2d}, s_q\}$ are the independent reading sequences, dependent reading sequences, and BiLSTM final state of independent reading of $p$ and $q$ respectively.", "BiLSTM inputs are the word embedding sequences and initial state vectors.", "Finally, we convert $\tilde{p} \in \mathbb{R}^{n \times 2d}$ and $\tilde{q} \in \mathbb{R}^{m \times 2d}$ to fixed-length vectors with pooling, $U \in \mathbb{R}^{4d}$ and $V \in \mathbb{R}^{4d}$.", "As shown in Equations 12 and 13, we employ both max and average pooling and describe the overall inference relationship with the concatenation of their outputs: $U = [\mathrm{MaxPooling}(\tilde{p}), \mathrm{AvgPooling}(\tilde{p})]$ (12); $V = [\mathrm{MaxPooling}(\tilde{q}), \mathrm{AvgPooling}(\tilde{q})]$ (13).", "Classification Here, we feed the concatenation of $U$ and $V$ ($[U, V]$) into a multilayer perceptron (MLP) classifier that includes a hidden layer with tanh activation and softmax output layer.", "The model is trained in an end-to-end manner.", "The Stanford Natural Language Inference (SNLI) dataset contains 570K human-annotated sentence pairs.",
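Continuing the encoder sketch above, the fusion and pooling of Equations 6-13 could look as follows; this is an illustrative reading of the equations, not the authors' implementation, and the dimensions follow the stated d = 450.

```python
import torch
import torch.nn as nn

class InferenceComposer(nn.Module):
    """Sketch of Eqs. 6-13: fuse attended vectors, re-read with a shared
    BiLSTM (again a simplification), and pool to fixed-length vectors."""
    def __init__(self, d=450):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(8 * d, d), nn.ReLU())
        self.rnn = nn.LSTM(d, d, bidirectional=True, batch_first=True)

    def fuse(self, h, h_tilde):           # Eq. 6/7: [h; h~; h - h~; h * h~]
        a = torch.cat([h, h_tilde, h - h_tilde, h * h_tilde], dim=-1)
        return self.proj(a)

    def forward(self, u_hat, v_hat, u_tilde, v_tilde):
        p, q = self.fuse(u_hat, u_tilde), self.fuse(v_hat, v_tilde)
        p_bar, s_p = self.rnn(p)          # independent readings (Eqs. 8-9)
        q_bar, s_q = self.rnn(q)
        p_hat, _ = self.rnn(p, s_q)       # dependent readings
        q_hat, _ = self.rnn(q, s_p)
        p_max = torch.max(p_bar, p_hat)   # Eq. 10
        q_max = torch.max(q_bar, q_hat)   # Eq. 11
        U = torch.cat([p_max.max(1).values, p_max.mean(1)], dim=-1)  # Eq. 12
        V = torch.cat([q_max.max(1).values, q_max.mean(1)], dim=-1)  # Eq. 13
        return torch.cat([U, V], dim=-1)  # input to the MLP classifier
```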
"The premises are drawn from the Flickr30k (Plummer et al., 2015) corpus, and then the hypotheses are manually composed for each relationship class (entailment, neutral, contradiction, and '-').", "The '-' class indicates that there is no consensus decision among the annotators; consequently, we remove them during the training and evaluation following the literature.", "We use the same data split as provided in Bowman et al. (2015) to report comparable results with other models.", "We use pre-trained 300D GloVe 840B vectors (Pennington et al., 2014) to initialize our word embedding vectors.", "All hidden states of BiLSTMs during input encoding and inference have 450 dimensions ($r = 300$ and $d = 450$).", "The weights are learned by minimizing the log-loss on the training data via the Adam optimizer (Kingma and Ba, 2014).", "The initial learning rate is 0.0004.", "To avoid overfitting, we use dropout (Srivastava et al., 2014) with the rate of 0.4 for regularization, which is applied to all feedforward connections.", "During training, the word embeddings are updated to learn effective representations for the NLI task.", "We use a fairly small batch size of 32 to provide more exploration power to the model.", "Our observation indicates that using larger batch sizes hurts the performance of our model.", "Ensemble methods use multiple models to obtain better predictive performance.", "Previous works typically utilize trivial ensemble strategies by either using majority votes or averaging the probability distributions over the same model with different initialization seeds (Wang et al., 2017; Gong et al., 2017).", "By contrast, we use weighted averaging of the probability distributions where the weight of each model is learned through its performance on the SNLI development set.", "Furthermore, the differences between our models in the ensemble originate from: 1) variations in the number of dependent readings (i.e. 1 and 3 rounds of dependent reading), and 2) the projection layer activation (tanh and ReLU).", "Figure 2: Performance of n ensemble models reported for training (red, top), development (blue, middle), and test (green, bottom) sets of SNLI.", "The main intuition behind this design is that the effectiveness of a model may depend on the complexity of a premise-hypothesis instance.", "For a simple instance, a simple model could perform better than a complex one, while a complex instance may need further consideration toward disambiguation.", "Consequently, using models with different rounds of dependent readings in the encoding stage should be beneficial.", "Figure 2 demonstrates the observed performance of our ensemble method with different numbers of models.", "The performance of the models is reported based on the best obtained accuracy on the development set.", "We also study the effectiveness of other ensemble strategies, e.g. majority voting, and averaging the probability distributions.",
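The weighted averaging described above could be instantiated as below. The paper does not specify how the weights are derived from development performance, so normalizing the development accuracies is only one plausible choice; names and shapes are assumptions.

```python
import numpy as np

def weighted_ensemble(probs, dev_scores):
    """Weighted averaging of model probability distributions (a sketch).
    probs: (n_models, n_examples, n_classes); dev_scores: one dev-set
    accuracy per model, used here as unnormalized weights."""
    w = np.asarray(dev_scores, dtype=float)
    w = w / w.sum()                                     # normalize weights
    return np.tensordot(w, np.asarray(probs), axes=1)   # (n_examples, n_classes)

# Toy usage with three hypothetical models on one 3-class example.
probs = [[[0.6, 0.3, 0.1]], [[0.5, 0.4, 0.1]], [[0.2, 0.5, 0.3]]]
print(weighted_ensemble(probs, dev_scores=[0.888, 0.889, 0.885]).argmax(-1))
```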
"But our ensemble strategy performs the best among them (see Section 1 in the supplementary material for additional details).", "We perform a trivial preprocessing step on SNLI to recover some out-of-vocabulary words found in the development set and test set.", "Note that our vocabulary contains all words that are seen in the training set, so there is no out-of-vocabulary word in it.", "The SNLI dataset is not immune to human errors, specifically, misspelled words.", "We noticed that misspelling is the main reason for some of the observed out-of-vocabulary words.", "Consequently, we simply fix the unseen misspelled words using the Microsoft spell-checker (other approaches like edit distance can also be used).", "Moreover, while dealing with an unseen word during evaluation, we try to: 1) replace it with its lower case, or 2) split the word when it contains a '-' (e.g. marsh-like) or starts with 'un' (e.g. unloading).", "If we still could not find the word in our vocabulary, we consider it as an unknown word (a sketch of these heuristics appears below).", "In the next subsection, we demonstrate the importance and impact of such trivial preprocessing (see Section 2 in the supplementary material for additional details).", "Table 2 shows the accuracy of the models on training and test sets of SNLI.", "The first row represents a baseline classifier presented by Bowman et al. (2015) that utilizes handcrafted features.", "All other listed models are deep-learning based.", "The gap between the traditional model and deep learning models demonstrates the effectiveness of deep learning methods for this task.", "We also report the estimated human performance on the SNLI dataset, which is the average accuracy of five annotators in comparison to the gold labels (Gong et al., 2017).", "It is noteworthy that recent deep learning models surpass the human performance in the NLI task.", "As shown in Table 2, previous deep learning models (rows 2-19) can be divided into three categories: 1) sentence encoding based models (rows 2-7), 2) single inter-sentence attention-based models (rows 8-16), and 3) ensemble inter-sentence attention-based models (rows 17-19).", "We can see that inter-sentence attention-based models perform better than sentence encoding based models, which supports our intuition.", "Natural language inference requires a deep interaction between the premise and hypothesis.", "Inter-sentence attention-based approaches can provide such interaction while sentence encoding based models fail to do so.", "To further enhance the modeling of interaction between the premise and hypothesis for efficient disambiguation of their relationship, we introduce the dependent reading strategy in our proposed DR-BiLSTM model.", "The results demonstrate the effectiveness of our model.",
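A sketch of the out-of-vocabulary recovery heuristics described above. The spell-checking step (done with the Microsoft spell-checker in the paper) is omitted, and the vocabulary is assumed to be a set of strings.

```python
def recover_oov(word, vocab):
    """Try lowercasing, hyphen splitting, and 'un'-prefix splitting before
    falling back to the unknown token."""
    if word in vocab:
        return [word]
    if word.lower() in vocab:
        return [word.lower()]                     # 1) lowercase match
    if "-" in word:                               # 2a) e.g. "marsh-like"
        parts = word.lower().split("-")
        if all(p in vocab for p in parts):
            return parts
    if word.lower().startswith("un"):             # 2b) e.g. "unloading"
        stem = word.lower()[2:]
        if stem in vocab:
            return ["un", stem]
    return ["<unk>"]                              # fall back to unknown

print(recover_oov("marsh-like", {"marsh", "like", "un", "loading"}))
# ['marsh', 'like']
```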
"Table 2: Accuracies of the models on the training set and test set of SNLI (Train / Test): (Bowman et al., 2015) (Feature) 99.7% / 78.2%; (Bowman et al., 2015) 83.9% / 80.6%; (Vendrov et al., 2015) 98.8% / 81.4%; (Mou et al., 2016) 83.3% / 82.1%; (Bowman et al., 2016) 89.2% / 83.2%; (Liu et al., 2016b) 84.5% / 84.2%; (Yu and Munkhdalai, 2017a) 86.2% / 84.6%; (Rocktaschel et al., 2015) 85.3% / 83.5%; (Wang and Jiang, 2016) 92.0% / 86.1%; (Liu et al., 2016a) 88.5% / 86.3%; (Parikh et al., 2016) 90.5% / 86.8%; (Yu and Munkhdalai, 2017b) 88.5% / 87.3%; (Sha et al., 2016) 90.7% / 87.5%; (Wang et al., 2017) (Single) 90.9% / 87.5%; (Chen et al., 2017) (Single) 92.6% / 88.0%; (Gong et al., 2017) (Single) 91.2% / 88.0%; (Chen et al., 2017) (Ensemble) 93.5% / 88.6%; (Wang et al., 2017) (Ensemble) 93.2% / 88.8%; (Gong et al., 2017) (Ensemble) 92.3% / 88.9%; Human Performance (Estimated) 97.2% / 87.7%; DR-BiLSTM (Single) 94.1% / 88.5%; DR-BiLSTM (Single) + Process 94.1% / 88.9%; DR-BiLSTM (Ensemble) 94.8% / 89.3%; DR-BiLSTM (Ensem.) + Process 94.8% / 89.6%.", "DR-BiLSTM (Single) achieves 88.5% accuracy on the test set, which is noticeably the best reported result among the existing single models for this task.", "Note that the difference between DR-BiLSTM and Chen et al. (2017) is statistically significant with a p-value of < 0.001 over the Chi-square test.", "(The Chi-square test ($\chi^2$ test) is used to determine if there is a significant difference between two categorical variables, i.e. models' outputs.)", "To further improve the performance of NLI systems, researchers have built ensemble models.", "Previously, ensemble systems obtained the best performance on SNLI with a huge margin.", "Table 2 shows that our proposed single model achieves competitive results compared to these reported ensemble models.", "Our ensemble model considerably outperforms the current state-of-the-art by obtaining 89.3% accuracy.", "Up until this point, we discussed the performance of our models where we have not considered preprocessing for recovering the out-of-vocabulary words.", "In Table 2, 'DR-BiLSTM (Single) + Process' and 'DR-BiLSTM (Ensem.) + Process' represent the performance of our models on the preprocessed dataset.", "We can see that our preprocessing mechanism leads to further improvements of 0.4% and 0.3% on the SNLI test set for our single and ensemble models respectively.", "In fact, our single model (DR-BiLSTM (Single) + Process) obtains the state-of-the-art performance over both reported single and ensemble models by performing a simple preprocessing step.", "Furthermore, DR-BiLSTM (Ensem.) + Process outperforms the existing state-of-the-art remarkably (0.7% improvement).",
"For more comparison and analyses, we use DR-BiLSTM (Single) and DR-BiLSTM (Ensemble) as our single and ensemble models in the rest of the paper.", "We conducted an ablation study on our model to examine the importance and effect of each major component.", "Then, we study the impact of BiLSTM dimensionality on the performance of the development set and training set of SNLI.", "We investigate all settings on the development set of the SNLI dataset.", "Table 3 shows the ablation study results on the development set of SNLI along with the statistical significance test results in comparison to the proposed model, DR-BiLSTM.", "We can see that all modifications lead to a new model and their differences are statistically significant with a p-value of < 0.001 over the Chi-square test.", "Table 3 shows that removing any part from our model hurts the development set accuracy, which indicates the effectiveness of these components.", "Among all components, three of them have noticeable influences: max pooling, difference in the attention stage, and dependent reading.", "Most importantly, the last four study cases in Table 3 (rows 8-11) verify the main intuitions behind our proposed model.", "They illustrate the importance of our proposed dependent reading strategy which leads to significant improvement, specifically in the encoding stage.", "We are convinced that the importance of dependent reading in the encoding stage originates from its ability to focus on more important and relevant aspects of the sentences due to its prior knowledge of the other sentence during the encoding procedure.", "Figure 3 shows the behavior of the proposed model accuracy on the training set and development set of SNLI.", "Since the models are selected based on the best observed development set accuracy during the training procedure, the training accuracy curve (red, top) is not strictly increasing.", "Figure 3 demonstrates that we achieve the best performance with 450-dimensional BiLSTMs.", "In other words, using BiLSTMs with lower dimensionality causes the model to suffer from the lack of space for capturing proper information and dependencies.", "On the other hand, using higher dimensionality leads to overfitting which hurts the performance on the development set.", "Hence, we use 450-dimensional BiLSTMs in our proposed model.", "We first investigate the performance of our models categorically.", "Then, we show a visualization of the energy function in the attention stage (Equation 3) for an instance from the SNLI test set.", "To qualitatively evaluate the performance of our models, we design a set of annotation tags that can be extracted automatically.", "This design is inspired by the reported annotation tags in Williams et al. (2017).",
"The specifications of our annotation tags are as follows (a small extraction sketch is given after this passage):", "High Overlap: premise and hypothesis sentences share more than 70% of tokens.", "Regular Overlap: sentences share between 30% and 70% of tokens.", "Low Overlap: sentences share less than 30% of tokens.", "Long Sentence: either sentence is longer than 20 tokens.", "Regular Sentence: premise or hypothesis length is between 5 and 20 tokens.", "Short Sentence: either sentence is shorter than 5 tokens.", "Negation: negation is present in a sentence.", "Quantifier: either of the sentences contains one of the following quantifiers: much, enough, more, most, less, least, no, none, some, any, many, few, several, almost, nearly.", "Belief: either of the sentences contains one of the following belief verbs: know, believe, understand, doubt, think, suppose, recognize, forget, remember, imagine, mean, agree, disagree, deny, promise.", "Table 4 shows the frequency of the aforementioned annotation tags in the SNLI test set along with the performance (accuracy) of ESIM (Chen et al., 2017), DR-BiLSTM (Single), and DR-BiLSTM (Ensemble).", "Table 4 can be divided into four major categories: 1) gold label data, 2) word overlap, 3) sentence length, and 4) occurrence of special words.", "We can see that DR-BiLSTM (Ensemble) performs the best in all categories, which matches our expectation.", "Table 4 (excerpt; columns: Annotation Tag, Freq, ESIM, DR(S), DR(E), where DR(S) is DR-BiLSTM (Single) and DR(E) is DR-BiLSTM (Ensemble)): Entailment 34.3%, 90.0%, 89.8%, 90.9%; Neutral 32.8%, 83.7%, 85.1%, 85.6%; Contradiction 32.9%, 90.0%, 90.5%, 91.4%; High Overlap 24.3%, 91.2%, 90.7%, 92.1%.", "Moreover, DR-BiLSTM (Single) performs noticeably better than ESIM in most of the categories except Entailment, High Overlap, and Long Sentence, for which our model is not far behind (gaps of 0.2%, 0.5%, and 0.9%, respectively).", "It is noteworthy that DR-BiLSTM (Single) performs better than ESIM in more frequent categories.", "Specifically, the performance of our model in the Neutral, Negation, and Quantifier categories (improvements of 1.4%, 3.5%, and 1.9%, respectively) indicates the superiority of our model in understanding and disambiguating complex samples.", "Our investigations indicate that ESIM generates somewhat uniform attention for most of the word pairs while our model could effectively attend to specific parts of the given sentences and provide more meaningful attention.", "In other words, the dependent reading strategy enables our model to achieve meaningful representations, which leads to better attention to obtain further gains on such categories like Negation and Quantifier sentences (see Section 3 in the supplementary material for additional details).", "Finally, we show a visualization of the normalized attention weights (energy function, Equation 3) of our model in Figure 4.", "We show a sentence pair, where the premise is 'Male in a blue jacket decides to lay the grass.', and the hypothesis is 'The guy in yellow is rolling on the grass.', and its logical relationship is contradiction.",
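The automatically extractable annotation tags defined above could be computed roughly as follows. Whitespace tokenization, the overlap denominator (union of token sets), and the negation cue list are simplifying assumptions not stated in the paper.

```python
QUANTIFIERS = {"much", "enough", "more", "most", "less", "least", "no",
               "none", "some", "any", "many", "few", "several",
               "almost", "nearly"}
BELIEF_VERBS = {"know", "believe", "understand", "doubt", "think",
                "suppose", "recognize", "forget", "remember", "imagine",
                "mean", "agree", "disagree", "deny", "promise"}

def annotation_tags(premise, hypothesis):
    """Extract the annotation tags for one premise-hypothesis pair."""
    p, h = premise.lower().split(), hypothesis.lower().split()
    tags = set()
    overlap = len(set(p) & set(h)) / max(len(set(p) | set(h)), 1)
    tags.add("High Overlap" if overlap > 0.7 else
             "Low Overlap" if overlap < 0.3 else "Regular Overlap")
    if max(len(p), len(h)) > 20: tags.add("Long Sentence")
    if min(len(p), len(h)) < 5: tags.add("Short Sentence")
    if 5 <= len(p) <= 20 or 5 <= len(h) <= 20: tags.add("Regular Sentence")
    words = set(p) | set(h)
    if words & {"not", "no", "never", "n't"}: tags.add("Negation")  # assumed cues
    if words & QUANTIFIERS: tags.add("Quantifier")
    if words & BELIEF_VERBS: tags.add("Belief")
    return tags
```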
"Figure 4 indicates the model's ability in attending to critical pairs of words like <Male, guy>, <decides, rolling>, and <lay, rolling>.", "Finally, high attention between {decides, lay} and {rolling}, and {Male} and {guy}, leads the model to correctly classify the sentence pair as contradiction (for more samples with attention visualizations, see Section 4 in the supplementary material).", "We propose a novel natural language inference model (DR-BiLSTM) that benefits from a dependent reading strategy and achieves the state-of-the-art results on the SNLI dataset.", "We also introduce a sophisticated ensemble strategy and illustrate its effectiveness through experimentation.", "Moreover, we demonstrate the importance of a simple preprocessing step on the performance of our proposed models.", "Evaluation results show that the preprocessing step allows our DR-BiLSTM (single) model to outperform all previous single and ensemble methods.", "Similar superior performance is also observed for our DR-BiLSTM (ensemble) model.", "We show that our ensemble model outperforms the existing state-of-the-art by a considerable margin of 0.7%.", "Finally, we perform an extensive analysis to demonstrate the strength and weakness of the proposed model, which would pave the way for further improvements in this domain." ]
[ "objective", "abstain", "objective", "objective", "objective", "objective", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "objective", "result", "abstain", "method", "other", "other", "other", "other", "other", "method", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "result", "result", "result", "abstain", "objective" ]
[ "Current work on multimodal machine translation (MMT) has suggested that the visual modality is either unnecessary or only marginally beneficial.", "We posit that this is a consequence of the very simple, short and repetitive sentences used in the only available dataset for the task (Multi30K), rendering the source text sufficient as context.", "In the general case, however, we believe that it is possible to combine visual and textual information in order to ground translations.", "In this paper we probe the contribution of the visual modality to state-of-the-art MMT models by conducting a systematic analysis where we partially deprive the models from source-side textual context.", "Our results show that under limited textual context, models are capable of leveraging the visual input to generate better translations.", "This contradicts the current belief that MMT models disregard the visual modality because of either the quality of the image features or the way they are integrated into the model.", "Multimodal Machine Translation (MMT) aims at designing better translation systems which take into account auxiliary inputs such as images.", "Initially organized as a shared task within the First Conference on Machine Translation (WMT16) (Specia et al., 2016), MMT has so far been studied using the Multi30K dataset (Elliott et al., 2016), a multilingual extension of Flickr30K (Young et al., 2014) with translations of the English image descriptions into German, French and Czech (Elliott et al., 2017; Barrault et al., 2018).", "The three editions of the shared task have seen many exciting approaches that can be broadly categorized as follows:", "(i) multimodal attention using convolutional features (Caglayan et al., 2016; Calixto et al., 2016; Libovicky and Helcl, 2017; Helcl et al., 2018)", "(ii) cross-modal interactions with spatially-unaware global features (Calixto and Liu, 2017; Ma et al., 2017; Caglayan et al., 2017a; Madhyastha et al., 2017) and", "(iii) the integration of regional features from object detection networks (Huang et al., 2016; Gronroos et al., 2018).", "Nevertheless, the conclusion about the contribution of the visual modality is still unclear: Gronroos et al. (2018) consider their multimodal gains modest and attribute the largest gain to the usage of external parallel corpora.", "Lala et al. 
(2018) observe that their multimodal word-sense disambiguation approach is not significantly different than the monomodal counterpart.", "The organizers of the latest edition of the shared task concluded that the multimodal integration schemes explored so far resulted in marginal changes in terms of automatic metrics and human evaluation (Barrault et al., 2018).", "In a similar vein, Elliott (2018) demonstrated that MMT models can translate without significant performance losses even in the presence of features from unrelated images.", "These empirical findings seem to indicate that images are ignored by the models and hint at the fact that this is due to representation or modeling limitations.", "We conjecture that the most plausible reason for the linguistic dominance is that at least in Multi30K the source text is sufficient to perform the translation, eventually preventing the visual information from intervening in the learning process.", "To investigate this hypothesis, we introduce several input degradation regimes (Section 2) and revisit state-of-the-art MMT models (Section 3) to assess their behavior under degraded regimes.", "We further probe the visual sensitivity by deliberately feeding features from unrelated images.", "Our results (Section 4) show that MMT models successfully exploit the visual modality when the linguistic context is scarce, but indeed tend to be less sensitive to this modality when exposed to complete sentences.", "In this section we propose several degradations to the input language modality to simulate conditions where sentences may miss crucial information.", "We denote a set of translation pairs by $D$ and indicate degraded variants with subscripts.", "Both the training and the test sets are degraded.", "Color Deprivation.", "We consistently replace source words that refer to colors with a special token [v] ($D_C$ in Table 1).", "Our hypothesis is that a monomodal system will have to rely on source-side contextual information and biases, while a multimodal architecture could potentially capitalize on color information extracted by exploiting the image and thus obtain better performance.", "This affects 3.3% and 3.1% of the words in the training and the test set, respectively.", "Entity Masking.", "The Flickr30K dataset, from which Multi30K is derived, has also been extended with coreference chains to tag mentions of visually depictable entities in image descriptions (Plummer et al., 2015).", "We use these to mask out the head nouns in the source sentences ($D_N$ in Table 1).", "This affects 26.2% of the words in both the training and the test set.", "We hypothesize that a multimodal system should heavily rely on the images to infer the missing parts.", "Progressive Masking.", "A progressively degraded variant $D_k$ replaces all but the first $k$ tokens of source sentences with [v].", "Unlike the color deprivation and entity masking, masking out suffixes does not guarantee systematic removal of visual context, but rather simulates an increasingly low-resource scenario.", "Overall, we form 16 degraded variants $D_k$ (Table 1) where $k \in \{0, 2, \ldots, 30\}$.",
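The masking schemes are simple token-level substitutions, so a short sketch suffices. The color list below is an assumption (the paper does not enumerate it), and entity masking would additionally require the Flickr30K coreference annotations.

```python
def progressive_mask(tokens, k, mask="[v]"):
    """D_k: keep the first k tokens and replace the rest with [v]."""
    return tokens[:k] + [mask] * max(len(tokens) - k, 0)

def color_deprive(tokens, colors, mask="[v]"):
    """D_C: consistently replace color words with [v]."""
    return [mask if t.lower() in colors else t for t in tokens]

sent = "a lady in a blue dress singing".split()
print(progressive_mask(sent, 2))
# ['a', 'lady', '[v]', '[v]', '[v]', '[v]', '[v]']
print(color_deprive(sent, {"blue", "red", "green", "yellow"}))
# ['a', 'lady', 'in', 'a', '[v]', 'dress', 'singing']
```

Both outputs match the $D_2$ and $D_C$ rows of Table 1 below.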
, 30 } .", "We stop at D 30 since 99.8% of the sentences in Multi30K are shorter than 30 words with an average sentence length of 12 words.", "D 0 where the only remaining information is the source sentence length is an interesting case from two perspectives: a neural machine translation (NMT) model trained on it resembles a target language model, while an MMT model becomes an image captioner with access to expected length information.", "Visual Sensitivity.", "Inspired by Elliott (2018), we experiment with incongruent decoding in order to understand how sensitive the multimodal systems are to the visual modality.", "This is achieved D a lady in a blue dress singing DC a lady in a [v] dress singing DN a [v] in a blue [v] singing D 4 a lady in a [v] [v] [v] D 2 a lady [v] [v] [v] [v] [v] D 0 [v] [v] [v] [v] [v] [v] [v] Table 1: An example of the proposed input degradation schemes: D is the original sentence.", "by explicitly violating the test-time semantic congruence across modalities.", "Specifically, we feed the visual features in reverse sample order to break image-sentence alignments.", "Consequently, a model capable of integrating the visual modality would likely deteriorate in terms of metrics.", "Dataset.", "We conduct experiments on the English French part of Multi30K.", "The models are trained on the concatenation of the train and val sets (30K sentences) whereas test2016 (dev) and test2017 (test) are used for early-stopping and model evaluation, respectively.", "For entity masking , we revert to the default Flickr30K splits and perform the model evaluation on test2016 , since test2017 is not annotated for entities.", "We use word-level vocabularies of 9,951 English and 11,216 French words.", "We use Moses (Koehn et al., 2007) scripts to lowercase, normalize and tokenize the sentences with hyphen splitting.", "The hyphens are stitched back prior to evaluation.", "Visual Features.", "We use a ResNet-50 CNN (He et al., 2016) trained on ImageNet (Deng et al., 2009) as image encoder.", "Prior to feature extraction, we center and standardize the images using ImageNet statistics, resize the shortest edge to 256 pixels and take a center crop of size 256x256.", "We extract spatial features of size 2048x8x8 from the final convolutional layer and apply L 2 normalization along the depth dimension (Caglayan et al., 2018).", "For the non-attentive model, we use the 2048-dimensional global average pooled version (pool5) of the above convolutional features.", "Models.", "Our baseline NMT is an attentive model (Bahdanau et al., 2014) with a 2-layer bidirectional GRU encoder (Cho et al., 2014) and a 2-layer conditional GRU decoder (Sennrich et al., 2017).", "The second layer of the decoder receives the output of the attention layer as input.", "For the MMT model, we explore the basic multimodal attention (DIRECT) (Caglayan et al., 2016) and its hierarchical (HIER) extension (Li-bovicky and Helcl, 2017).", "The former linearly projects the concatenation of textual and visual context vectors to obtain the multimodal context vector, while the latter replaces the concatenation with another attention layer.", "Finally, we also experiment with encoder-decoder initialization (INIT) (Calixto and Liu, 2017; Caglayan et al., 2017a) where we initialize both the encoder and the decoder using a non-linear transformation of the pool5 features.", "Hyperparameters.", "The encoder and decoder GRUs have 400 hidden units and are initialized with 0 except the multimodal INIT system.", "All embeddings are 
200-dimensional and the decoder embeddings are tied (Press and Wolf, 2016).", "A dropout of 0.4 and 0.5 is applied on source embeddings and encoder/decoder outputs, respectively (Srivastava et al., 2014).", "The weights are decayed with a factor of 1 e 5 .", "We use ADAM (Kingma and Ba, 2014) with a learning rate of 4 e 4 and mini-batches of 64 samples.", "The gradients are clipped if the total norm exceeds 1 (Pascanu et al., 2013).", "The training is early-stopped if dev set METEOR (Denkowski and Lavie, 2014) does not improve for ten epochs.", "All experiments are conducted with nmtpytorch 1 (Caglayan et al., 2017b).", "We train all systems three times each with different random initialization in order to perform significance testing with multeval (Clark et al., 2011).", "Throughout the section, we always report the mean over three runs (and the standard deviation) of the considered metrics.", "We decode the translations with a beam size of 12.", "1 github.com/lium-lst/nmtpytorch Figure 1: Entity masking: all masked MMT models are significantly better than the masked NMT (dashed).", "We first present test2017 METEOR scores for the baseline NMT and MMT systems, when trained on the full dataset D (Table 2).", "The first column indicates that, although MMT models perform slightly better on average, they are not significantly better than the baseline NMT.", "We now introduce and discuss the results obtained under the proposed degradation schemes.", "Please refer to Table 5 and the appendix for qualitative examples.", "Unlike the inconclusive results for D , we observe that all MMT models are significantly better than NMT when color deprivation is applied ( DC in Table 2).", "If we further focus on the subset of the test set subjected to color deprivation (247 sen-tences), the gain increases to 1.6 METEOR for HIER.", "For the latter subset, we also computed the average color accuracy per sentence and found that the attentive models are 12% better than the NMT (32.5 44.5) whereas the INIT model only brings 4% (32.5 36.5) improvement.", "This shows that more complex MMT models are better at integrating visual information to perform better.", "The gains are much more prominent with entity masking, where the degradation occurs at a larger scale: Attentive MMT models show up to 4.2 METEOR improvement over NMT (Figure 1).", "We observed a large performance drop with incongruent decoding , suggesting that the visual modality is 2 Since entity masking uses Flickr30K splits (Section 3) rather than our splits, the scores are not comparable to those from other experiments in this paper.", "now much more important than previously demonstrated (Elliott, 2018).", "A comparison of attention maps produced by the baseline and masked MMT models reveals that the attention weights are more consistent in the latter.", "An interesting example is given in Figure 2 where the masked MMT model attends to the correct region of the image and successfully translates a dropped word that was otherwise a spelling mistake (son son g ).", "Czech and German.", "In order to understand whether the above observations are also consistent across different languages, we extend the entity masking experiments to German and Czech parts of Multi30K.", "Table 3 shows the gain of each MMT system with respect to the NMT model and the subsequent drop caused by incongruent decoding 3 .", "First, we see that the multimodal benefits clearly hold for German and Czech, although the gains are lower than for French 4 .", "Second, when we compute 
the average drop from using incongruent images across all languages, we see how conservative the INIT system is ( 4.7) compared 3 For example, the INIT system for French (Figure 1) surpasses the baseline (50.5) by reaching 53.9 (+3.4), which ends up at 47.4 ( 6.5) after incongruent decoding.", "to HIER ( 6.1) and DIRECT ( 6.8).", "This raises a follow-up question as to whether the hidden state initialization eventually loses its impact throughout the recurrence where, as a consequence, the only modality processed is the text.", "Finally, we discuss the results of the progressive masking experiments for French.", "Figure 3 clearly shows that as the sentences are progressively degraded, all MMT systems are able to leverage the visual modality.", "When the multimodal task becomes image captioning at k =0 , MMT models improve over the language-model counterpart by 7 METEOR.", "Further qualitative examples show that the systems perform surprisingly well by producing visually plausible sentences (see Table 5 and the Appendix).", "To get a sense of the visual sensitivity, we pick the DIRECT models trained on four degraded variants and perform incongruent decoding .", "We notice that as the amount of linguistic information increases, the gap narrows down: the MMT system gradually becomes less perplexed by the incongruence or, put in other words, less sensitive to the visual modality (Table 4).", "We then conduct a contrastive blinding experiment where the DIRECT models are not only fed with incongruent features at decoding time but also trained with them from scratch.", "The results suggest that the blinded models learn to ignore the visual modality.", "In fact, their performance is equivalent to NMT models.", "We presented an in-depth study on the potential contribution of images for multimodal machine translation.", "Specifically, we analysed the behavior of state-of-the-art MMT models under several degradation schemes in the Multi30K dataset, in order to reveal and understand the impact of textual predominance.", "Our results show that the models explored are able to integrate the visual modality if the available modalities are complementary rather than redundant.", "In the latter case, the primary modality (text) sufficient to accomplish the task.", "This dominance effect corroborates the seminal work of Colavita (1974) in Psychophysics where it has been demonstrated that visual stimuli dominate over the auditory stimuli when humans are asked to perform a simple audiovisual discrimination task.", "Our investigation using source degradation also suggests that visual grounding can in-crease the robustness of machine translation systems by mitigating input noise such as errors in the source text.", "In the future, we would like to devise models that can learn when and how to integrate multiple modalities by taking care of the complementary and redundant aspects of them in an intelligent way.", "This work is a follow-up on the research efforts conducted within the Grounded sequence-to-sequence transduction team of the JSALT 2018 Workshop.", "We would like to thank Jindrich Libovicky for contributing the hierarchical attention to nmtpytorch during the workshop.", "We also thank the reviewers for their valuable comments.", "Ozan Caglayan and Loc Barrault received funding from the French National Research Agency (ANR) through the CHIST-ERA M2CR project under the contract ANR-15-CHR2-0006-01.", "Lucia Specia and Pranava Madhyastha received funding from the MultiMT (H2020 ERC Starting Grant No. 
678017) and MMVC (Newton Fund Institutional Links Grant, ID 352343575) projects." ]
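The progressive masking scheme described in the excerpt above is simple enough to reproduce directly. Below is a minimal sketch, assuming whitespace-tokenized Multi30K source sentences and the paper's [v] placeholder; the function name and the plain-Python implementation are illustrative rather than the authors' actual code.

```python
def progressively_mask(sentence: str, k: int, mask: str = "[v]") -> str:
    """Build the D_k variant: keep the first k tokens and replace the
    rest with the mask token, preserving the sentence length."""
    tokens = sentence.split()
    return " ".join(tokens[:k] + [mask] * max(0, len(tokens) - k))

# Reproducing the Table 1 example:
src = "a lady in a blue dress singing"
for k in (4, 2, 0):
    print(f"D_{k}:", progressively_mask(src, k))
# D_4: a lady in a [v] [v] [v]
# D_2: a lady [v] [v] [v] [v] [v]
# D_0: [v] [v] [v] [v] [v] [v] [v]
```

Because the sentence length is preserved, the D_0 variant still exposes the expected target length to the model, which is what makes the NMT baseline on D_0 behave like a target language model.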
[ "abstain", "objective", "method", "objective", "result", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "result", "objective", "abstain", "abstain", "abstain", "method", "result", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "objective", "result", "objective", "abstain", "abstain", "method", "method", "other", "other", "other", "other", "other" ]
[ "Online misogyny, a category of online abusive language, has serious and harmful social consequences.", "Automatic detection of misogynistic language online, while imperative, poses complicated challenges to both data gathering, data annotation, and bias mitigation, as this type of data is linguistically complex and diverse.", "This paper makes three contributions in this area: Firstly, we describe the detailed design of our iterative annotation process and codebook.", "Secondly, we present a comprehensive taxonomy of labels for annotating misogyny in natural written language, and finally, we introduce a high-quality dataset of annotated posts sampled from social media posts.", "Abusive language is a phenomenon with serious consequences for its victims, and misogyny is no exception.", "According to a 2017 report from Amnesty International, 23% of women from eight different countries have experienced online abuse or harassment at least once, and 41% of these said that on at least one occasion, these online experiences made them feel that their physical safety was threatened (Amnesty International, 2017).", "Automatic detection of abusive language can help identify and report harmful accounts and acts, and allows counter narratives (Chung et al., 2019; Garland et al., 2020; Ziems et al., 2020).", "Due to the volume of online text and the mental impact on humans who are employed to moderate online abusive language moderators of abusive online content have been shown to develop serious PTSD and depressive symptoms (Casey Newton, 2020) it is urgent to develop systems to automate the detection and moderation of online abusive language.", "Automatic detection, however, presents significant challenges (Vidgen et al., 2019).", "Abusive language is linguistically diverse (Vid-gen and Derczynski, 2020), both explicitly, in the form of swear words or profanities; implicitly, in the form of sarcasm or humor (Waseem et al., 2017); and subtly, in the form of attitudes and opinions.", "Recognizing distinctions between variants of misogyny is challenging for humans, let alone computers.", "Systems for automatic detection are usually created using labeled training data (Kiritchenko et al., 2020), hence, their performance depends on the quality and representativity of the available datasets and their labels.", "We currently lack transparent methods for how to create diverse datasets.", "When abusive language is annotated, classes are often created based on each unique dataset (a purely inductive approach), rather than taking advantage of general, established terminology from, for instance, social science or psychology (a deductive approach, building on existing research).", "This makes classification scores difficult to compare and apply across diverse training datasets.", "This paper investigates the research question: How might we design a comprehensive annotation process which results in high quality data for automatically detecting misogyny?", "We make three novel contributions: 1. Methodology: We describe our iterative approach to the annotation process in a transparent way which allows for a higher degree of comparability with similar research.", "2. Model: We present a taxonomy and annotation codebook grounded in previous research on automatic detection of misogyny as well as social science terminology.", "3. 
Dataset: We present a new, annotated corpus of Danish social media posts, Bajer, 1 annotated for misogyny, including analysis of class balance, word frequencies, Inter-Annotator Agreement (IAA), annotation errors, and classification baseline.", "1 https://github.com/phze22/ Online-Misogyny-in-Danish-Bajer Since research has indicated that misogyny presents differently across languages, and, likely, cultures (Anzovino et al., 2018), an additional contribution of this work is that it presents a dataset of misogyny in Danish , a North Germanic language, spoken by only six million people, and indeed the first work of its kind in any Scandina-vian/Nordic culture to our knowledge.", "In Denmark an increasing proportion of people refrain from online discourse due to the harsh tone, with 68% of social media users self-excluding in 2021 (Anal-yse & Tal, 2021; Andersen and Langberg, 2021), making this study contextually relevant.", "Further, the lack of language resources available for Danish (Kirkedal et al., 2019) coupled with its lexical complexity (Bleses et al., 2008) make it an intricate research objective for natural language processing.", "Abusive language is as ancient a phenomenon as written language itself.", "Written profanities and insults about others are found as old as graffiti on ruins from the Roman empire (Wallace, 2005).", "Automatic processing of abusive text is far more recent, early work including e.g. Davidson et al. (2017) and Waseem et al. (2017).", "Research in this field has produced both data, taxonomies, and methods for detecting and defining abuse, but there exists no objective framing for what constitutes abuse and what does not.", "In this work, we focus on a specific category of online abuse, namely misogyny .", "Misogyny can be categorised as a subbranch of hate speech and is described as hateful content targeting women (Waseem, 2016).", "The degree of toxicity depends on complicated subjective measures, for instance, the receiver's perception of the dialect of the speaker (Sap et al., 2019).", "Annotating misogyny typically requires more than a binary present/absent label.", "Chiril et al. (2020), for instance, use three categories to classify misogyny in French: direct sexist content (directly addressed to a woman or a group of women), descriptive sexist content (describing a woman or women in general) or reporting sexist content (a report of a sexism experience or a denunciation of a sexist behaviour).", "This categorization does not, however, specify the type of misogyny.", "from the work of Waseem and Hovy (2016).", "While harsh sexism (hateful or negative views of women) is the more recognized type of sexism, benevolent sexism (a subjectively positive view towards men or women), often exemplified as a compliment using a positive stereotypical picture, is still discriminating (Glick and Fiske, 1996).", "Other cat-egorisations of harassment towards women have distinguished between physical, sexual and indirect occurrences (Sharifirad and Jacovi, 2019).", "Anzovino et al. 
(2018) classify misogyny more segregated in five subcategories: Discredit , Harassment & Threats of Violence , Derailing , Stereotype & Objectification , and Dominance .", "They also distinguish between if the abuse is active or passive towards the target.", "These labels appear to apply well to other languages, and quantitative representation of labels differ by language.", "For example, Spanish shows a stronger presence of Dominance , Italian of Stereotype & Objectification , and English of Discredit .", "As we see variance across languages, building terminology for labeling misogyny correctly is therefore a key challenge in being able to detect it automatically.", "Parikh et al. (2019) take a multi-label approach to categorizing posts from the Everyday Sexism Project, where as many as 23 different categories are not mutually exclusive.", "The types of sexism identified in their dataset include body shaming , gaslighting , and mansplain-ing .", "While the categories of this work are extremely detailed and socially useful, several studies have demonstrated the challenge for human annotators to use labels that are intuitively unclear (Chatzakou et al., 2017; Vidgen et al., 2019) or closely related to each other (Founta et al., 2018).", "Guest et al. (2021) suggest a novel taxonomy for misogyny labeling applied to a corpus of primarily English Reddit posts.", "Based on previous research, including Anzovino et al. (2018), they present the following four overarching categories of misogyny:", "(i) Misogynistic Pejoratives,", "(ii) descriptions of Misogynistic Treatment,", "(iii) acts of Misogynistic Derogation and", "(iv) Gendered Personal attacks against women.", "The current work combines previous categorizations on misogyny into a taxonomy which is useful for annotation of misogyny in all languages, while being transparent about the construction of this taxonomy.", "Our work builds on the previous work presented in this section, continuous discussions among the annotators, and the addition of social science terminology to create a single-label taxonomy of misogyny as identified in Danish social media posts across various platforms.", "The creation of quality datasets involves a chain of methodological decisions.", "In this section, we will present the rationale of creating our dataset under three headlines: Dataset, Annotation process, and Mitigating biases.", "Bender and Friedman (2018) present a set of data statements for NLP which help alleviate issues related to exclusion and bias in language technology, lead[ing] to better precision in claims about how natural language processing research can generalize and thus better engineering results.", "Data statements are a characterization of a dataset which provides context to others to understand how experimental results might generalize and what biases might be reflected in systems built on the software.", "We present our data statements for the dataset creation in the following: Curation rationale: Random sampling of text often results in scarcity of examples of specifically misogynistic content (e.g. (Wulczyn et al., 2017; Founta et al., 2018)).", "Therefore, we used the common alternative of collecting data by using pre-defined keywords with a potentially high search hit (e.g. Waseem and Hovy (2016)), and identifying relevant user-profiles (e.g. (Anzovino et al., 2018)) and related topics (e.g. 
(Kumar et al., 2018)).", "We searched for keyword (specific slurs, hash-tags), that are known to occur in sexist posts.", "These were defined by previous work, a slur list from Reddit, and from interviews and surveys of online misogyny among women.", "We also searched for broader terms like sex or women, which do not appear exclusively in a misogynistic context, for example in the topic search, where we gathered relevant posts and their comments from the social media pages of public media.", "A complete list of keywords can be found in the appendix.", "Social media provides a potentially biased, but broad snapshot of online human discourse, with plenty of language and behaviours represented.", "Following best practice guidelines (Vidgen and Derczynski, 2020), we sampled from a language for which there are no existing annotations of the target phenomenon: Danish.", "Different social media platforms attract different user groups and can exhibit domain-specific language (Karan and Snajder, 2018).", "Rather than choosing one platform (existing misogyny datasets are primarily based on Twitter and Reddit (Guest et al., 2021)), we sampled from multiple platforms: Statista (2020) shows that the platform where most Danish users are present is Facebook, followed by Twitter, YouTube, Instagram and lastly, Reddit.", "The dataset was sampled from Twitter, Facebook and Reddit posts as plain text.", "Text characteristics: Danish colloquial web speech.", "Posts, comments, retweets: max.", "length 512, average length: 161 characters.", "Annotator demographics: We recruited annotators aiming specifically for diversity in gender, age, occupation/ background (linguistic and ethnographic knowledge), region (spoken dialects) as well as an additional facilitator with a background in ethnography to lead initial discussions (see Table 1).", "Annotators were appointed as full-time employees with full standard benefits.", "In annotating our dataset, we built on the MATTER framework (Pustejovsky and Stubbs, 2012) and use the variation presented by Finlayson and Erjavec (2017) (the MALER framework), where the Train", "We created a set of guidelines for the annotators.", "The annotators were first asked to read the guidelines and individually annotate about 150 different posts, after which there was a shared discussion.", "After this pilot round, the volume of samples per annotator was increased and every sample labeled by 2-3 annotators.", "When instances were flagged' or annotators disagreed on them, they were discussed during weekly meetings, and misunderstandings were resolved together with the external facilitator.", "After round three, when reaching 7k annotated posts (Figure 2), we continued with independent annotations maintaining a 15% instance overlap between randomly picked annotator pairs.", "Management of annotator disagreement is an important part of the process design.", "Disagreements can be solved by majority voting (Davidson et al., 2017; Wiegand et al., 2019), labeled as abuse if at least one annotator has labeled it (Golbeck et al., 2017) or by a third objective instance (Gao and Huang, 2017).", "Most datasets use crowdsourcing platforms or a few academic experts for annotation (Vidgen and Derczynski, 2020).", "Inter-annotator-agreement (IAA) and classification performance are established as two grounded evaluation measurements for annotation quality (Vidgen and Derczynski, 2020).", "Comparing the performance of amateur annotators (while providing guidelines) with expert annotators for sexism and 
racism annotation, Waseem (2016) show that the quality of amateur annotators is competitive with expert annotations when several amateurs agree.", "Facing the trade-off between training annotators intensely and the number of involved annotators, we continued with the trained annotators and group discussions/ individual revisions for flagged content and disagreements (Section 5.4).", "Prior work demonstrates that biases in datasets can occur through the training and selection of annotators or selection of posts to annotate (Geva et al., 2019; Wiegand et al., 2019; Sap et al., 2019; Al Kuwatly et al., 2020; Ousidhoum et al., 2020).", "Selection biases: Selection biases for abusive language can be seen in the sampling of text, for instance when using keyword search (Wiegand et al., 2019), topic dependency (Ousidhoum et al., 2020), users (Wiegand et al., 2019), domain (Wiegand et al., 2019), time (Florio et al., 2020) and lack of linguistic variety (Vidgen and Derczynski, 2020).", "Label biases: Label biases can be caused by, for instance, non-representative annotator selection, lack in training/domain expertise, preconceived notions, or pre-held stereotypes.", "These biases are treated in relation to abusive language datasets by several sources, e.g. general sampling and annotators biases (Waseem, 2016; Al Kuwatly et al., 2020), biases towards minority identity mentions based for example on gender or race (Davidson et al., 2017; Dixon et al., 2018; Park et al., 2018; Davidson et al., 2019), and political annotator biases (Wich et al., 2020).", "Other qualitative biases comprise, for instance, demographic bias, over-generalization, topic exposure as social biases (Hovy and Spruit, 2016).", "Systematic measurement of biases in datasets remains an open research problem.", "Friedman and Nissenbaum (1996) discuss freedom from biases as an ideal for good computer systems, and state that methods applied during data creation influence the quality of the resulting dataset quality with which systems are later trained.", "Shah et al. (2020) showed that half of biases are caused by the methodology design, and presented a first approach of classifying a broad range of predictive biases under one umbrella in NLP.", "We applied several measures to mitigate biases occurring through the annotation design and execution: First, we selected labels grounded in existing, peer-reviewed research from more than one field.", "Second, we aimed for diversity in annotator profiles in terms of age, gender, dialect, and background.", "Third, we recruited a facilitator with a background in ethnographic studies and provided intense annotator training.", "Fourth, we engaged in weekly group discussions, iteratively improving the codebook and integrating edge cases.", "Fifth, the selection of platforms from which we sampled data is based on local user representation in Denmark, rather than convenience.", "Sixth, diverse sampling methods for data collection reduced selection biases.", "Good language taxonomies systematically bring together definitions and describe general principles of each definition.", "The purpose is categorizing reference lang.", "and mapping entities in a way that demonstrates their natural relationship, e.g. Schmidt and Wiegand (2017); Anzovino et al. (2018); Zampieri et al. (2019); Banko et al. 
(2020).", "Their application is especially clear in shared tasks, as for multilingual sexism detection against women, SemEval 2019 (Basile et al., 2019).", "On one hand, it should be an aim of a taxonomy that it is easily understandable and applicable for annotators from various background and with different expertise levels.", "On the other hand, a taxonomy is only useful if it is also correct and comprehensive , i.e. a good representation of the world.", "Therefore, we have aimed to integrate definitions from several sources of previous research (deductive approach) as well as categories resulting from discussions of the concrete data (inductive approach).", "Our taxonomy for misogyny is the product of", "(a) existing research in online abusive language and misogyny (specifically the work in Table 2),", "(b) a review of misogyny in the context of online platforms and online platforms in a Danish context", "(c) iterative adjustments during the process including discussions between the authors and annotators.", "The labeling scheme (Figure 1) is the main structure for guidelines for the annotators, while a codebook ensured common understanding of the label descriptions.", "The codebook provided the annotators with definitions from the combined taxonomies.", "The descriptions were adjusted to distinguish edge-cases during the weekly discussion rounds.", "The taxonomy has four levels: (1) Abusive (abusive/not abusive), (2) Target (indi-vidual/group/others/untargeted), (3) Group type (racism/misogyny/others), (4) Misogyny type (harassment/discredit/stereotype & objectifica-tion/dominance/neosexism/benevolent).", "To demonstrate the relationship of misogyny to other instances of abusive language, our taxonomy embeds misogyny as a subcategory of abusive language.", "Misogyny is distinguished from, for instance, personal attacks, which is closer to the abusive language of cyberbullying .", "For definitions and examples from the dataset to the categories, see Appendix A.1.", "We build on the taxonomy suggested in Zampieri et al. (2019), which has been applied to datasets in several languages as well as in SemEval (Zampieri et al., 2020).", "While Parikh et al. (2019) provide a rich collection of sexism categories, multiple, overlapping labels do not fulfill the purpose of being easily understandable and applicable for annotators.", "The taxonomies in Anzovino et al. (2018) and Jha and Mamidi (2017) have proved their application to English, Italian and Spanish, and offer more general labels.", "Some labels from previous work were removed from the labeling scheme during the weekly discussions among authors and annotators, (for instance derailing ), because no instances of them were found in the data.", "During our analysis of misogyny in the Danish context", "(b), we became aware of the term neosex-ism.", "Neosexism is a concept defined in Tougas et al. (1999), and presents as the belief that women have already achieved equality, and that discrimination of women does not exist .", "Neosexism is based on covert sexist beliefs, which can go unnoticed, disappearing into the cultural norms. 
Those who consider themselves supporters of women's rights may maintain non-traditional gender roles, but also exhibit subtle sexist beliefs", "(Martinez et al., 2010).", "Sexism in Denmark appear to correlate with the modern sexism scale", "(Skewes et al., 2019; Tougas et al., 1995; Swim et al., 1995; Campbell et al., 1997).", "Neosexism was added to the taxonomy before annotation began, and as we will see in the analysis section, neosexism was the most common not sexual harassment misogyny neosexism discredit stereotypes & objectification benevolent sexism dominance disgrace/ humiliate women", "(Figure 1).", "Here follow some examples of neosexism from our dataset: Resenting complaints about discrimination: I often feel that people have treated me better and spoken nicer to me because I was a girl, so I have a hard time taking it seriously when people think that women are so discriminated against in the Western world. Questioning the existence of discrimination: Can you point to research showing that childbirth is the reason why mothers miss out on promotions? Presenting men as victims: Classic. If it's a disadvantage for women it's the fault of society. If men, then it must be their own. Sexism thrives on the feminist wing.", "Neosexism is an implicit form of misogyny, which is reflected in annotation challenges summarised in section 5.5.", "In prior taxonomies, instances of neosexism would most likely have been assigned to the implicit appearances of misogynistic treatment", "(ii)", "(Guest et al., 2021)", "or perhaps not classified as misogyny at all.", "Neosexism is most closely related to the definition disrespectful actions, suggesting or stating that women should be controlled in some way, especially by men.", "This definition, however, does not describe the direct denial that misogyny exists.", "Without a distinct and explicit neosexism category, however, these phenomena may be mixed up or even ignored.", "taxonomies in abusive language while integrating context-related occurrences.", "A similar idea is demonstrated in Mulki and Ghanem", "(2021), adding damning as an occurrence of misogyny in an Arabic context.", "While most of previous research is done in English, these language-specific findings highlight the need for taxonomies that are flexible to different contexts, i.e. they are good representations of the world .", "Lastly, from an NLP point of view, languages with less resources for training data can profit further from transfer learning with similar labels, as demonstrated in Pamungkas et al.", "(2020)", "for misogyny detection.", "The final dataset contains 27.9K comments, of which 7.5K contain abusive language.", "Misogynistic posts comprise 7% of overall posts.", "Neosexism is by far the most frequently represented class with 1.3K tagged posts, while Discredit and Stereotype & objectification are present in 0.3K and 0.2K posts.", "Benevolent , Dominance , and Harrassment are tagged in between only 45 and 70 posts.", "Most posts tagged as abusive and/or containing misogyny are retrieved from searches on posts from public media profiles, see Table 3. 
Facebook and Twitter are equally represented, while Reddit is in the minority.", "Reddit posts were sampled from an available historical collection.", "and comments to these posts; keyw.", "= keyword/hashtag-search; popul.", "= most interactions.", "Frequencies of the words; kvinder'", "( women )", "and mnd'", "( men )", "were the highest, but these words did not represent strong polarities towards abusive and misogynistic content", "(Table 4).", "The word user' represents de-identified references to discussion participants", "(@USER).", "5.4 Inter-Annotator Agreement", "(IAA)", "We measure IAA using the agreement between 3 annotators for each instance until round 3", "(7k posts), and then sub-sampled data overlaps between 2 annotators.", "IAA is calculated through average label agreement at post level for example if two annotators label two posts [abusive, untargeted] and [abusive, group targeted] the agreement would be 0.5.", "Our IAA during iterations of dataset construction ranged between 0.5 and 0.71.", "In the penultimate annotation round we saw a drop in agreement", "(Figure 2); this is attributed to a change in underlying text genre, moving to longer Reddit posts.", "25% of disagreements about classifications were solved during discussions.", "Annotators had the opportunity to adjust their disagreed annotation in the first revision individually, which represents the remaining 75%", "(Table 5).", "The majority of disagreements were on subtask A, deciding whether the post was abusive or not.", "The final overall Fleiss' Kappa", "(Fleiss", "(1971))", "for individual subtasks are: abusive/not: 0.58, targeted: 0.54, misogyny/not: 0.54.", "It is notable here that the dataset is significantly more skewed than prior work which upsampled to 1:1 class balances.", "Chance-corrected measurements are sensitive to agreement on rare categories and higher agreement is needed to reach reliability, as shown in Artstein and Poesio", "(2008).", "Based on the discussion rounds, the following types of posts were the most challenging to annotate:", "1. Interpretation of the author's intention", "(irony, sarcasm, jokes, and questions)", "E.g. Haha!", "Virksomheder i Danmark: Vi anstter aldrig en kvinde igen...", "(Haha! Companies in Denmark: We will never hire a woman again ...)", "sexisme og seksuelt frisind er da vist ikke det samme?", "(I don't believe sexism and sexual liberalism are the same?)", "2. Degree of abuse: Misrepresenting the truth to harm the subject or fact E.g. Han er en stor lgner", "(He is a big liar)", "3. Hashtags: Meaning and usage of hashtags in relation to the context E.g. #nometoo 4. World knowledge required: Du siger at Frank bruger sin magt forkert men du bruger din til at brnde sa mange mnd pa balet ...", "(You say that Frank uses his power wrongly, but you use yours to throw so many men on the fire ... referring to a specific political topic.)", "5. Quotes: re-posting or re-tweeting a quote gives limited information about the support or denial of the author 6. 
Jargon: receiver's perception I skal alle have et klap i masen herfra", "(You all get a pat on the behind from me)", "Handling these was an iterative process of raising cases for revision in the discussion rounds, formulating the issue, and providing documentation.", "We added the status and, where applicable, outcome from these cases to the guidelines.", "We also added explanations of hashtags and definitions of unclear identities, like the media, as a company.", "For quotes without declaration of rejection or support, we agreed to label them as not abusive, since the motivation of re-posting is not clear.", "Lastly, we provide a classification baseline: For misogyny and abusive language, the BERT model from Devlin et al.", "(2019)", "proved to be a robust architecture for cross-domain", "(Swamy et al., 2019)", "and cross-lingual", "(Pamungkas et al., 2020; Mulki and Ghanem, 2021)", "transfer.", "We use therefore multilingual BERT", "('bert-base-multilingual-un cased')", "for general language understanding in Danish, fine-tuned on our dataset.", "Model: We follow the suggested parameters from Mosbach et al.", "(2020)", "for fine-tuning", "(learn-ing rate 2e-5, weight decay 0.01, AdamW optimizer without bias correction).", "Class imbalance is handled by weighted sampling and data split for train/test 80/20.", "Experiments are conducted with batch size 32 using Tesla V100 GPU.", "Preprocessing: Our initial pre-processing of the unstrucutured posts included converting emojis to text, url replacement, limit @USER and punctuation occurrences and adding special tokens for upper case letters adopted from Ahn et al.", "(2020).", "Classification: Since the effect of applying multi-task-learning might not conditionally improve performance", "(Mulki and Ghanem, 2021), the classification is evaluated on a subset of the dataset for each subtask", "(see Table 6)", "including all posts of the target label", "(e.g. misogyny)", "and stratified sampling of the non-target classes", "(e.g. for non-misogynistic: abusive and non-abusive posts)", "with 10k posts for each experiment.", "Results are reported when the model reached stabilized per class f1 scores for all classes on the test set", "( 0.01/20).", "The results indicate the expected challenge of accurately predicting less-represented classes and generalizing to unseen data.", "Analysing False Positives and False Negatives on the misogyny detection task, we cannot recognise noticeable correlations with other abusive forms and disagreements/ difficult cases from the annotation task.", "The goal was to ensure, first, a suf-ficient amount of misogynistic content and, secondly, mitigation of biases stemming from a uniform dataset.", "Surprisingly, topic sampling unearthed a higher density of misogynistic content than targeted keyword search", "(Table 3).", "While researching platforms, we noticed the limited presence of Danish for publicly available men-dominated fora", "(e.g. 
gaming forums such as DotA2 and extremist plaftorms such as Gab", "(Kennedy et al., 2018)).", "This, as well as limitations of platform APIs caused a narrow data selection.", "Often, non-privileged languages can gain from cross-language transfer learning.", "We experimented with translating misogynistic posts from Fersini et al.", "(2018)", "to Danish, using translation services, and thereby augment the minority class data.", "Translation services did not provide a sampling alternative.", "Additionally, as discovered by Anzovino et al.", "(2018), misogynistic content seems to vary with culture.", "This makes total text corrected label corrected out 960 877 224 48 Table 7: Translating IberEval posts EN to DA language-specific investigations important, both for the sake of quality of automatic detection systems, as well as for cultural discovery and investigation.", "Table 7 shows results of post-translation manual correction by annotators", "(all fluent in English).", "Reflections on annotation process Using just seven annotators has the disadvantage that one is unlikely to achieve as broad a range of annotator profiles as, for instance, through crowdsourcing.", "However, during annotation and weekly discussions, we saw clear benefits from having a small annotator group with different backgrounds and intense training.", "While annotation quality cannot be measured by IAA alone, the time for debate clar-ified taxonomy items, gave thorough guidelines, and increased the likelihood of correct annotations.", "The latter reflects the quality of the final dataset, while the former two indicate that the taxonomy and codebook are likely useful for other researchers analysing and processing online misogyny.", "The semi-open development of the taxonomy and frequent discussions allowed the detection neosexism as an implicit form of misogyny.", "Future research in taxonomies of misogyny could consider including distinctions between active/passive misogyny, as suggested by Anzovino et al.", "(2018)", "as well as other sub-phenomena.", "In the resulting dataset, we saw a strong representation of neosexism .", "Whether this is a specific cultural phenomenon for Danish, or indicative of general online behaviour, is not clear.", "The use of unified taxonomies in research affords the possibility to test the codebook guidelines iteratively.", "We include a short version of the guidelines in the appendix; the original document consists of seventeen pages.", "In a feedback survey following the annotation work, most of the annotators described that during the process, they used the guidelines primarily for revision in case they felt unsure how to label the post.", "To make the annotation more intuitively clear for annotators, we suggest reconsidering documentation tools and their accessibility for annotators.", "Guidelines are crucial for handling linguistic challenges, and well-documented decisions about them serve to create comparable research on detecting online misogyny across languages and dataset.", "In this work, we have documented the construction of a dataset for training systems for automatic detection of online misogyny.", "We also present the resulting dataset of misogyny in Danish social media, Bajer, including class balance, word counts, and baseline as an indicator.", "This dataset is available for research purposes upon request.", "The objective of this research was to explore the design of an annotation process which would result in a high quality dataset, and which was transparent and useful for 
other researchers.", "Our approach was to recruit and train a diverse group of annotators and build a taxonomy and codebook through collaborative and iterative annotator-involved discussions.", "The annotators reached good agreement, indicating that the taxonomy and codebook were understandable and useful.", "However, to rigorously evaluate the quality of the dataset and the performance of models that build on it, the models should be evaluated in practice with different text types and languages, as well as compared and combined with models trained on different datasets, i.e. Guest et al.", "(2021).", "Because online misogyny is a sensitive and precarious subject, we also propose that the performance of automatic detection models should be evaluated with use of qualitative methods", "(Inie and Derczynski, 2021), bringing humans into the loop.", "As we found through our continuous discussions, online abuse can present in surprising forms, for instance the denial that misogyny exists.", "The necessary integration of knowledge and concepts from relevant fields, e.g. social science, into NLP research is only really possible through thorough human participation and discussion.", "This research was supported by the IT University of Copenhagen, Computer Science for internal funding on Abusive Language Detection; and the Independent Research Fund Denmark under project 9131-00131B, Verif-AI.", "We thank our annotators Nina Schler Nrgaard, Tamana Saidi, Jonas Joachim Kofoed, Freja Birk, Cecilia Andersen, Ul-rik Dolzyk, Im Sofie Skak and Rania M. Tawfik.", "We are also grateful for discussions with Debora Nozza, Elisabetta Fersini and Tracie Farrell.", "Usernames and discussion participant/author names are replaced with a token @USER value.", "Annotators were presented with the text of the post and no author information.", "Posts that could not be interpreted by annotators because of missing background information were excluded.", "We only gathered public posts.", "All further information about dataset creation is included in the main body of the paper above." ]
[ "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "objective", "objective", "method", "objective", "other", "abstain", "abstain", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "other", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "other", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "objective", "abstain", "result", "abstain", "other", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain" ]
[ "Understanding contrastive opinions is a key component of argument generation.", "Central to an argument is the claim, a statement that is in dispute.", "Generating a counter-argument then requires generating a response in contrast to the main claim of the original argument.", "To generate contrastive claims, we create a corpus of Reddit comment pairs self-labeled by posters using the acronym FTFY (fixed that for you).", "We then train neural models on these pairs to edit the original claim and produce a new claim with a different view.", "We demonstrate significant improvement over a sequence-to-sequence baseline in BLEU score and a human evaluation for fluency, coherence, and contrast.", "In the Toulmin model (1958), often used in computational argumentation research, the center of the argument is the claim, a statement that is in dispute (Govier, 2010).", "In recent years, there has been increased interest in argument generation (Bilu and Slonim, 2016; Hua and Wang, 2018; Le et al., 2018).", "Given an argument, a system that generates counter-arguments would need to 1) identify the claims to refute, 2) generate a new claim with a different view, and 3) find supporting evidence for the new claim.", "We focus on this second task, which requires an understanding of contrast.", "A system that can generate claims with different views is a step closer to understanding and generating arguments (Apothloz et al., 1993).", "We build on previous work in automated claim generation (Bilu et al., 2015) which examined generating opposing claims via explicit negation.", "However, researchers also noted that not every claim has an exact opposite.", "Consider a claim from Reddit: Get employers out of the business, pass universal single-payer healthcare .", "This is an example of a policy claim a view on what should be done (Schiappa and Nordin, 2013).", "While negation of this claim is a plausible response (e.g. asserting there should be no change by stating Do not get employers out of the business, do not pass universal healthcare ), negation limits the diversity of responses that can lead to a productive dialogue.", "Instead, consider a response that provides an alternative suggestion: Get employers out of the business, deregulate and allow cross-state competition .", "(2) In Example 1, the speaker believes in an increased role for government while in Example 2, the speaker believes in a decreased one.", "As these views are on different sides of the political spectrum, it is unlikely that a single speaker would utter both claims.", "In related work, de Marneffe et al. 
(2008) define two sentences as contradictory when they are extremely unlikely to be true simultaneously.", "We thus define a contrastive claim as one that is likely to be contradictory if made by the speaker of the original claim .", "Our goal, then, is to develop a method for generating contrastive claims when explicit negation is not the best option.", "Generating claims in this way also has the benefit of providing new content that can be used for retrieving or generating supporting evidence.", "In order to make progress towards generating contrastive responses, we need large, high-quality datasets that illustrate this phenomenon.", "We construct a dataset of 1,083,520 contrastive comment pairs drawn from Reddit using a predictive model to filter out non-contrastive claims.", "Each pair contains very similar, partially aligned text but the responder has significantly modified the original post.", "We use this dataset to model differences in views and generate a new claim given an original comment.", "The similarity within these pairs allows us to use them as distantly labeled contrastive word alignments.", "The word alignments provide semantic information about which words and phrases can be substituted in context in a coherent, meaningful way.", "1. Methods and data for contrastive claim identification to mine comment pairs from Reddit, resulting in a large, continuously growing dataset of 1,083,520 distant-labeled examples.", "2. A crowd-labeled set of 2,625 comments each paired with 5 new contrastive responses generated by additional annotators.", "3. Models for generating contrastive claims using neural sequence models and constrained decoding.", "In the following sections, we describe the task methodology and data collection and processing.", "Next, we present neural models for contrastive claim generation and evaluate our work and present an error analysis.", "We then discuss related work in contrast/contradiction, argumentation, and generation before concluding.", "Previous work in claim generation (Bilu et al., 2015) focused on explicit negation to provide opposing claims.", "While negation plays an important role in argumentation (Apothloz et al., 1993), researchers found that explicit negation may result in incoherent responses (Bilu et al., 2015).", "Furthermore, recent empirical studies have shown that arguments that provide new content (Wachsmuth et al., 2018) tend to be more effective.", "While new concepts can be introduced in other ways by find-ing semantically relevant content, we may find it desirable to explicitly model contrast in order to control the output of the model as part of a rhetorical strategy, e.g. 
concessions (Musi, 2018).", "We thus develop a model that generates a contrastive claim given an input claim.", "Contrastive claims may differ in more than just viewpoint; they may also contain stylistic differences and paraphrases, among other aspects.", "We thus propose to model contrastive claims by controlling for context and maintaining the same text 1 Data and code available at github.com/chridey/fixedthat between pairs of contrastive claims except for the contrastive word or phrase.", "Much of the previous work in contrast and contradiction has examined the relationship between words or sentences.", "In order to understand when words and phrases are contrastive in argumentation, we need to examine them in context .", "For example, consider the claim Hillary Clinton should be president.", "A reasonable contrastive claim might be Bernie Sanders should be president.", "(rather than the explicit negation Hillary Clinton should not be president. )", "In this context, Hillary Clinton and Bernie Sanders are contrastive entities as they were both running for president.", "However, for the claim Hillary Clinton was the most accomplished Secretary of State in recent memory.", "they would be unrelated.", "Consider also that we could generate the claim Hillary Clinton should be senator.", "This contrastive claim is not coherent given the context.", "Generating a contrastive claim then requires 1) identifying the correct substitution span and 2) generating a response with semantically relevant replacements.", "While some contrastive claims are not coherent, there are often multiple plausible responses, similar to tasks such as dialogue generation.", "For example, Donald Trump should be president is just as appropriate as Bernie Sanders should be president .", "We thus treat this as a dialogue generation task where the goal is to generate a plausible response given an input context.", "We obtain training data by scraping the social media site Reddit for comments containing the acronym FTFY .", "2 FTFY is a common acronym meaning fixed that for you. 3 FTFY responses (hereafter FTFY) are used to respond to another comment by editing part of the parent comment (hereafter parent).", "Most commonly, FTFY is used for three categories of responses: 1) expressing a contrastive claim (e.g. the parent is Bernie Sanders for president and the FTFY is Hillary Clinton should be president ) which may be sarcastic (e.g. Ted Cruz for president becomes Zodiac 2 https://en.wiktionary.org/wiki/FTFY 3 https://reddit.zendesk.com/hc/en-us/articles/205173295-What-do-all-these-acronyms-mean killer for president ) 2) making a joke (e.g. This Python library really piques my interest vs. This really *py*ques my interest ), and 3) correcting a typo (e.g. This peaks my interest vs. 
"In Section 3.2, we describe how we identify category 1 (contrastive claims) for modeling.", "To obtain historical Reddit data, we mined comments from the site pushshift.io for December 2008 through October 2017.", "This results in 2,200,258 pairs from Reddit, where a pair consists of a parent and an FTFY.", "We find that many of the top occurring subreddits are ones where we would expect strong opinions (/r/politics, /r/worldnews, and /r/gaming).", "To filter the data to only the type of response that we are interested in, we annotated comment pairs for contrastive claims and other types.", "We use our definition of contrastive claims based on contradiction, where both the parent and FTFY are a claim and they are unlikely to be beliefs held by the same speaker.", "A joke is a response that does not meaningfully contrast with the parent and commonly takes the form of a pun, rhyme, or oronym.", "A correction is a response to a typo, which may be a spelling or grammatical error.", "Any other pair is labeled as other, including pairs where the parent is not a claim.", "In order to identify contrastive claims, we selected a random subset of the Reddit data from prior to September 2017 and annotated 1,993 comments.", "Annotators were native speakers of English, and the Inter-Annotator Agreement using Krippendorff's alpha was 0.72.", "Contrast occurs in slightly more than half of the sampled cases (51.4%), with jokes (23.0%) and corrections (21.2%) comprising about one quarter each.", "We then train a binary classifier to predict contrastive claims, thus enabling better quality data for the generation task.", "To identify the sentence in the parent that the FTFY responds to and derive features for classification, we use an edit distance metric to obtain sentence and word alignments between the parent comment and response.", "As the words in the parent and response are mostly in the same order and most FTFYs contain significant overlap with the parent, it is possible to find alignments by moving a sliding window over the parent.", "A sample of 100 comments verifies that this approach yields exact word alignments in 75 comments and exact sentence alignments in 93.", "Given these pairs of comments, we derive linguistic and structural features for training a binary classifier.", "For each pair of comments, we compute features for the words in the entire comment span and features from the aligned phrases span only (as identified by edit distance).", "From the aligned phrases we compute the character edit distance and character Jaccard similarity (both normalized by the number of characters) to attempt to capture jokes and typos (the similarity should be high if the FTFY is inventing an oronym or correcting a spelling error).", "From the entire comment, we use the percentage of characters copied, as a low percentage may indicate a poor alignment, and the percentage of non-ASCII characters, as many of the jokes use emojis or upside-down text.", "In addition, we use features from GloVe (Pennington et al., 2014) word embeddings (we found the 50-dimensional Wikipedia+Gigaword embeddings to be sufficient) for both the entire comment and aligned phrases.", "We include the percentage of words in the embedding vocabulary for both spans for both the parent and FTFY.", "The reason for this feature is to identify infrequent words which may be typos or jokes.", "We compute the cosine similarity of the average word embeddings between the parent and FTFY for both spans.", "Finally, we use average word embeddings for both spans for both parent and FTFY."
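The alignment step just described can be sketched as follows. This is a simplified version that aligns the FTFY to the closest parent sentence under a length-normalized, word-level edit distance, rather than the exact sliding-window implementation, which is not spelled out above.

```python
def edit_distance(a, b):
    """Word-level Levenshtein distance via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete from a
                           dp[i][j - 1] + 1,         # insert into a
                           dp[i - 1][j - 1] + cost)  # substitute
    return dp[m][n]

def align_to_parent(parent_sentences, ftfy_tokens):
    """Return the parent sentence closest to the FTFY under
    length-normalized edit distance, i.e. the sentence it responds to."""
    def norm_dist(sent):
        return edit_distance(sent, ftfy_tokens) / max(len(sent), len(ftfy_tokens), 1)
    return min(parent_sentences, key=norm_dist)

# usage:
# align_to_parent([["bernie", "for", "president"]], ["hillary", "for", "president"])
```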
"As we want to model the generation of new content, not explicit negation, we removed any pairs where the difference was only stop words.", "The set of stop words includes all the default stop words in spaCy (spacy.io) combined with expletives and special tokens (we replaced all URLs and usernames).", "We trained a logistic regression classifier and evaluated using 4-fold cross-validation.", "We compare to a character overlap baseline where any example with Jaccard similarity > 0.9 and edit distance < 0.15 was classified as non-contrastive.", "The goal of this baseline is to illustrate how much of the non-contrastive data involves simple or nonexistent substitutions.", "Results are shown in Table 1.", "Table 1 (Results of Identifying Contrastive Claims): Majority: Precision 51.4, Recall 100, F-score 67.5; Baseline: Precision 67.75, Recall 77.19, F-score 72.16; LR: Precision 74.22, Recall 87.60, F-score 80.25.", "Our model obtains an F-score of 80.25, an 8-point absolute improvement over the baseline.", "After using the trained model to classify the remaining data, we have 1,083,797 Reddit pairs.", "We set aside 10,307 pairs from October 1-20, 2017 for development and October 21-30 for test (6,773), with the remainder used for training.", "As we are primarily working with sentences, the mean parent length was 16.3 and the mean FTFY length was 14.3.", "The resulting test FTFYs are naturally occurring and so do not suffer from annotation artifacts.", "At the same time, they are noisy and may not reflect the desired phenomenon.", "Thus, we also conducted an experiment on Amazon Mechanical Turk (AMT) to obtain additional gold references, which are also required by metrics such as BLEU (Papineni et al., 2002).", "We selected 2,625 pairs from the 10 most frequent categories (see Table 2).", "These categories form a three-level hierarchy for each subreddit, and we use the second level; e.g., for /r/pokemongo the categories are Pokemon, Video Games, and Gaming, so we use Video Games.", "Before participating, each annotator was required to pass a qualification test of five questions to gauge their knowledge of that topic.", "For the movies category, one question we asked was whether, for the sentence Steven Spielberg is the greatest director of all time, we could instead use Stanley Kubrick or Paul McCartney.", "If they passed this test, the annotators were then given the parent comment and keywords (the subreddit and three category levels) to provide additional context.", "We obtained five new FTFYs for each parent and validated them manually to remove obvious spam or trivial negation (e.g., not or can't)."
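A minimal, self-contained sketch of the character-overlap baseline just described. Normalizing the edit distance by the longer span's length is an assumption, since the text only says the features are normalized by the number of characters.

```python
def levenshtein(a: str, b: str) -> int:
    """Character-level edit distance (two-row dynamic programming)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def predict_contrastive(parent_span: str, ftfy_span: str) -> bool:
    """Character-overlap baseline: pairs with very high character overlap
    and very low normalized edit distance (likely typo fixes or puns) are
    classified as non-contrastive; everything else as contrastive."""
    sa, sb = set(parent_span), set(ftfy_span)
    jaccard = len(sa & sb) / max(len(sa | sb), 1)
    dist = levenshtein(parent_span, ftfy_span) / max(len(parent_span), len(ftfy_span), 1)
    return not (jaccard > 0.9 and dist < 0.15)
```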
"Our goal of generating contrastive claims can be broken down into two primary tasks: 1) identifying the words in the original comment that should be removed or replaced and 2) generating the appropriate substitutions and any necessary context.", "Initially, we thus experimented with a modular approach by tagging each word in the parent and then using the model predictions to determine if we should copy, delete, or replace a segment with a new word or phrase.", "We tried the bi-directional LSTM-CNN-CRF model of Ma and Hovy (2016) and used our edit distance word alignments to obtain labels for copying, deleting, or replacing.", "However, we found this model performed only slightly above random predictions, and with error propagation, the model is unlikely to produce fluent and accurate output.", "Instead, we use an end-to-end approach using techniques from machine translation.", "We use neural sequence-to-sequence encoder-decoder models (Sutskever et al., 2014) with attention for our experiments.", "The tokens from the parent are passed as input to a bi-directional GRU (Cho et al., 2014) to obtain a sequence of encoder hidden states $h_i$.", "Our decoder is also a GRU, which at time $t$ generates a hidden state $s_t$ from the previous hidden state $s_{t-1}$ along with the input.", "When training, the input $x_t$ is computed from the previous word in the gold training data if we are in teacher forcing mode (Williams and Zipser, 1989) and otherwise is the prediction made by the model at the previous time step.", "When testing, we also use the model predictions.", "The input word $w_t$ may be augmented by additional features, as discussed in Section 4.2.", "In the baseline scenario $x_t = e(w_t)$, where $e$ is an embedding.", "The hidden state $s_t$ is then combined with a context vector $h^*_t$, which is a weighted combination of the encoder hidden states using an attention mechanism: $h^*_t = \sum_i \alpha_{it} h_i$.", "To calculate $\alpha_{it}$, we use the attention of Luong et al. (2015), as this encourages the model to select features in the encoder hidden state which correlate with the decoder hidden state, which we want because our input and output are similar.", "Our experiments on the development data verified this, as Bahdanau attention (Bahdanau et al., 2015) performed worse.", "Attention is then calculated as: $\alpha_{it} = \frac{\exp(h_i^{\top} s_t)}{\sum_{i'} \exp(h_{i'}^{\top} s_t)}$.", "Finally, we make a prediction of a vocabulary word $w$ by using features from the context and decoder hidden state with a projection matrix $W$ and output vocabulary matrix $V$: $P(w) = \mathrm{softmax}(V \tanh(W[s_t; h^*_t] + b_w) + b_v)$.", "We explored using a copy mechanism (See et al., 2017) for word prediction but found it difficult to prevent the model from copying the entire input.", "Decoder Input: We evaluate two representations of the target input: as a sequence of words and as a sequence of edits.", "The sequence of words approach is the standard encoder-decoder setup.", "For the example parent Hillary Clinton for president 2020 and FTFY Bernie Sanders for president, we would use the FTFY without modification."
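A minimal PyTorch sketch of one decoder step with Luong-style dot-product attention, reflecting the equations above. Dimensions, module names, and the use of GRUCell for a single step are illustrative assumptions, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttnDecoderStep(nn.Module):
    """One GRU decoder step with Luong-style (dot-product) attention."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=300):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRUCell(emb_dim, hid_dim)
        self.proj = nn.Linear(2 * hid_dim, hid_dim)   # W (bias plays b_w)
        self.out = nn.Linear(hid_dim, vocab_size)     # V (bias plays b_v)

    def forward(self, w_prev, s_prev, enc_states):
        # w_prev: scalar long tensor; s_prev: (hid_dim,); enc_states: (src_len, hid_dim)
        x_t = self.embedding(w_prev)                  # x_t = e(w_t)
        s_t = self.gru(x_t.unsqueeze(0), s_prev.unsqueeze(0)).squeeze(0)
        scores = enc_states @ s_t                     # h_i^T s_t for every i
        alpha = F.softmax(scores, dim=0)              # attention weights alpha_it
        h_star = (alpha.unsqueeze(1) * enc_states).sum(0)  # h*_t = sum_i alpha_it h_i
        # P(w) = softmax(V tanh(W [s_t; h*_t] + b_w) + b_v)
        logits = self.out(torch.tanh(self.proj(torch.cat([s_t, h_star]))))
        return F.log_softmax(logits, dim=-1), s_t, alpha
```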
"Schmaltz et al. (2017) found success modeling error correction using sequence-to-sequence models by representing the target input as a sequence of edits.", "We apply a similar approach to our problem, generating a target sequence by following the best path in the matrix created by the edit distance algorithm.", "The new target sequence is the original parent interleaved with DELETE-N tokens that specify how many previous words to delete, followed by the newly generated content.", "For the same example, Hillary Clinton for president 2020, the modified target sequence would be Hillary Clinton DELETE-2 Bernie Sanders for president 2020 DELETE-1.", "Counter: Kikuchi et al. (2016) found that by using an embedding for a length variable they were able to control output length via a learned mechanism.", "In our work, we compute a counter variable which is initially set to the number of new content words the model should generate.", "During decoding, the counter is decremented if a word is generated that is not in the source input ($I$) or in the set of stop words ($S$) defined in Section 3.2.", "The model uses an embedding $e(c_t)$ for each count, which is parameterized by a count embedding matrix.", "The input to the decoder state in this scenario is $x_t = e(w_t, c_t)$.", "At each time step, the count is computed by: $c_0 = |O \setminus (S \cup I)|$ (or the desired count at test time), and $c_{t+1} = \begin{cases} c_t - 1, & \text{if } w_t \notin S \cup I \text{ and } c_t > 0 \\ c_t, & \text{otherwise} \end{cases}$, where $O$ is the set of gold output words in training.", "For the parent comment Hillary Clinton for president 2020 and FTFY Bernie Sanders for president, the decoder input is presented with the time $t$ in the first row of Table 3 and the inputs $w_t$ and $c_t$ in the second and third rows, respectively.", "At the start of decoding, the model expects to generate two new content words, which in this example it generates immediately and decrements the counter.", "When the counter reaches 0, it only generates stop or input words.", "Unlike the controlled-length scenario, at test time we do not know the number of new content words to generate.", "However, the count for most FTFYs is between 1 and 5, inclusive, so we can exhaustively search this range during decoding.", "We experimented with predicting the count but found it to be inaccurate, so we leave this for future work.", "Subreddit Information: As the model often needs to disambiguate polysemous words, additional context can be useful.", "Consider the parent comment this is a strange bug.", "In a programming subreddit, a sarcastic FTFY might be this is a strange feature.", "However, in a Pokemon subreddit, an FTFY might be this is a strange dinosaur in an argument over whether Armaldo is a bug or a dinosaur.", "We thus include additional features to be passed to the encoder at each time step, in the form of an embedding $g$ for each of the three category levels obtained in Section 3.3.", "These embeddings are concatenated to the input word $w_t$ at each timestep, i.e. $x_t = e(w_t, g^1_t, g^2_t, g^3_t)$."
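The counter update above is simple enough to state directly in code. This is a sketch of the update rule only; the count embedding lookup and beam expansion are omitted.

```python
def initial_count(gold_output, stop_words, input_words):
    """c_0 = |O \\ (S union I)|: number of new content words in the gold FTFY."""
    return len(set(gold_output) - (set(stop_words) | set(input_words)))

def update_count(c_t, w_t, stop_words, input_words):
    """Decrement the counter when a new content word (not a stop word and
    not copied from the source input) is generated; otherwise keep it."""
    if w_t not in stop_words and w_t not in input_words and c_t > 0:
        return c_t - 1
    return c_t

# At test time the true count is unknown, so decoding is run for each
# candidate budget in range(1, 6), matching the 1-5 range noted above.
```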
"We use a negative log-likelihood objective function $L_{NLL} = -\sum_{t \in 1:T} \log P(w_t)$, where $w_t$ is the gold token at time $t$, normalized by each batch.", "We also include an additional loss term that uses the encoder hidden states to make a binary prediction over the input for whether a token will be copied or inserted/deleted.", "For the example from Section 4.2, the target for Hillary Clinton for president 2020 would be 0 0 1 1 0.", "This encourages the model to select features that indicate whether the encoder input will be copied to the output.", "We use a 2-layer multi-layer perceptron and a binary cross-entropy loss $L_{BCE}$.", "The joint loss is then: $L = L_{NLL} + \lambda L_{BCE}$.", "We use beam search for generation, as this method has proven effective for many neural language generation tasks.", "For the settings of the model that require a counter, we expand the beam by count $m$ so that for a beam size $k$ we calculate $k \times m$ states.", "Filtering: We optionally include a constrained decoding mode where we filter the output based on the counter; when $c_t \geq 1$ the end-of-sentence (EOS) score is set to $-\infty$, and when $c_t = 0$ the score of any word $w \in V \setminus (S \cup I)$ is set to $-\infty$.", "The counter $c_t$ is decremented at every time step as in Section 4.2.", "In other words, when the counter is zero, we only allow the model to copy or generate stop words.", "When the counter is positive, we prevent the model from ending the sentence before it generates new content words and decrements the counter.", "The constrained decoding is possible with any combination of settings, with or without the counter embedding.", "We used PyTorch (Paszke et al., 2017) for all experiments.", "We used 300-dimensional vectors for the word embedding and GRU layers.", "The count embedding dimension was set to 5, with $m = 5$ and $k = 10$ for decoding.", "The category embedding dimensions were set to 5, 10, and 25 for each of the non-subreddit categories.", "We also set $\lambda = 1$ for multi-task learning.", "We used the Adam optimizer (Kingma and Ba, 2015) with settings of $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$, and a learning rate of $10^{-3}$ decaying by a factor of $0.1$ every epoch.", "We used dropout (Srivastava et al., 2014) on the embeddings with a probability of 0.2 and teacher forcing with probability 0.5.", "We used a batch size of 100 with 10 epochs, selecting the best model on the development set based on perplexity.", "We set the minimum frequency of a word in the vocabulary to 4.", "For training, development, and testing we use the data described in Section 3.3.", "The test reference data consists of the Reddit FTFYs and the FTFYs generated from AMT.", "We evaluate our models using automated metrics and human judgments.", "Automated metrics should reflect our joint goals of 1) copying necessary context and 2) making appropriate substitutions.", "To address point 1, we use BLEU-4 as a measure of similarity between the gold FTFY and the model output.", "As the FTFY may contain significant overlap with the parent, BLEU indicates how well the model copies the appropriate context.", "As BLEU reflects mostly span selection rather than the insertion of new content, we need alternative metrics to address point 2.", "However, addressing point 2 is more difficult due to the variety of possible substitutions, including named entities.", "For example, if the parent comment is jaguars for the win! and the gold FTFY is chiefs for the win! but the model produces cowboys for the win! (or any of 29 other NFL teams), most metrics would judge this response as incorrect even though it would be an acceptable response."
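Returning to the constrained decoding mode described above, here is a minimal sketch of the score filtering. Allowing EOS once the budget is spent is an assumption; the text only says EOS is forbidden while the counter is positive.

```python
import torch

NEG_INF = float("-inf")

def constrain_logits(logits, c_t, eos_id, allowed_ids):
    """Filter decoding scores based on the counter.

    logits:      (vocab_size,) raw scores at the current step
    c_t:         remaining new-content-word budget
    allowed_ids: vocabulary ids of stop words and source-input words (S union I)
    """
    logits = logits.clone()
    if c_t >= 1:
        # counter still positive: forbid ending the sentence early
        logits[eos_id] = NEG_INF
    elif c_t == 0:
        # budget exhausted: only copying and stop words are allowed
        mask = torch.full_like(logits, NEG_INF)
        mask[list(allowed_ids)] = 0.0
        mask[eos_id] = 0.0  # assumption: EOS is allowed once the budget is spent
        logits = logits + mask
    return logits
```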
"Thus we present results using both automated metrics and human evaluation.", "As an approximation to address point 2, we attempt to measure when the model is making changes rather than just copying the input.", "To this end, we present two additional metrics: novelty, a measure of whether novel content (non-stop word) tokens are generated relative to the parent comment, and partial match, a measure of whether the novel tokens in the gold FTFY match any of the novel tokens in the generated FTFY.", "To provide a reference point, we find that the partial match between two different gold FTFYs (Reddit and AMT) was 11.4% and BLEU was 47.28, which shows the difficulty of automatic evaluation.", "The scores are lower than expected because the Reddit FTFYs are noisy due to the process in Section 3.2.", "This also justifies obtaining the AMT FTFYs.", "Results are presented in Table 4.", "The baseline is a sequence-to-sequence model with attention.", "For other components, the counter embedding is referred to as COUNT, the category/subreddit embeddings as SUB, the sequence of edits as EDIT, and the multi-task copy loss as COPY.", "Table 4 (Automatic Evaluation). Columns per model: Novelty, Reddit BLEU-4, Reddit % Match, AMT BLEU-4, AMT % Match. Constrained: Baseline 79.88, 18.81, 4.67, 40.14, 10.06; COUNT 89.69, 22.61, 4.72, 47.55, 12.55; COUNT + SUB + COPY 90.45, 23.13, 4.83, 50.05, 14.92; EDIT 64.64, 16.12, 3.37, 35.48, 7.33; EDIT + COUNT + SUB + COPY 82.96, 19.37, 4.23, 42.69, 11.62. Unconstrained: Baseline 3.34, 7.31, 0.73, 25.83, 0.68; COUNT 16.19, 8.51, 1.95, 27.68, 2.36; COUNT + SUB + COPY 16.26, 9.62, 1.93, 31.23, 3.81; EDIT 7.97, 35.41, 1.57, 74.24, 1.56; EDIT + COUNT + SUB + COPY 39.99, 32.59, 3.25, 67.56, 6.25.", "The models in the top half of the table use constrained decoding and those in the bottom half are unconstrained, to show the learning capabilities of the models.", "For each model we compute statistical significance with bootstrap resampling (Koehn, 2004) against the constrained or unconstrained baseline as appropriate, and we find the COUNT and EDIT models to be significantly better for constrained and unconstrained decoding, respectively ($p < 0.005$)."
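One plausible reading of these two metrics in code; the exact definitions (e.g., corpus-level averaging) are not specified above, so this is an assumption.

```python
def novel_tokens(tokens, parent_tokens, stop_words):
    """Content tokens that appear in a comment but not in its parent."""
    parent = set(parent_tokens)
    return {t for t in tokens if t not in stop_words and t not in parent}

def novelty(generated, parent, stop_words):
    """1 if the generated FTFY introduces at least one novel content token."""
    return int(bool(novel_tokens(generated, parent, stop_words)))

def partial_match(generated, gold, parent, stop_words):
    """1 if any novel token in the gold FTFY also appears among the
    generated FTFY's novel tokens."""
    gen_novel = novel_tokens(generated, parent, stop_words)
    gold_novel = novel_tokens(gold, parent, stop_words)
    return int(bool(gen_novel & gold_novel))
```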
"Under constrained decoding, we see that the COUNT + SUB + COPY model performs the best in all metrics, although most of the performance can be attributed to the count embedding.", "When we allow the model to determine its own output, we find that EDIT + COUNT performs the best.", "In particular, this model does well at understanding which part of the context to select, and even does better than other unconstrained models at selecting appropriate substitutions.", "However, when we combine this model with constrained decoding, the improvement is smaller than for the other settings.", "We suspect that because the EDIT model often needs to generate a DELETE-N token before a new response, these longer-term dependencies are hard to capture with constrained decoding but easier if included in training.", "We also conducted a human evaluation of the model output on the same subset of 2,625 examples described in Section 3.3.", "We performed an additional experiment on AMT where we asked annotators to rate responses on fluency, coherence, and contrast.", "Fluency is a measure of the quality of the grammar and syntax and the likelihood that a native English speaker would utter that statement.", "Coherence is a measure of whether the response makes sense, is semantically meaningful, and would be usable as a response to a claim.", "Contrast is a measure of how much the response contradicts the original comment.", "We specified that if the response is different but does not provide a contrasting view it should receive a low rating.", "Previous work (Bilu et al., 2015) used fluency, clarity/usability (which we combine into coherence), and opposition (where we use contrast).", "We used a Likert scale where 5 is strongly agree and 1 is strongly disagree.", "We used the same data and qualification test from Section 3.3 for each category and used three annotators per example.", "We asked annotators to judge 4 different pairs: 3 model outputs and the gold Reddit FTFYs for comparison (we did not evaluate the AMT FTFYs, as these were generated by the same pool of annotators).", "We include the baseline, the baseline with constrained decoding, and the best constrained model (COUNT + SUB + COPY) according to BLEU and partial match.", "We verified that the annotators understood how to rate contrast by examining the distribution of responses: the annotators selected option 3 (neither) 15% of the time and preferred to select either extreme, 5 (21%) or 1 (27%).", "Results are presented in Table 5, showing a clear preference for the best model.", "Note the degradation in fluency for the constrained baseline, as the model is prevented from generating the EOS token and may repeat tokens up to the maximum length.", "We provide three examples of the model output in Table 6, with the first and third from the News and Politics category, demonstrating how the model handles different types of input.", "Table 6 (Model Output): (1) Parent: ah yes the wonders of the free market; Model: ah yes the wonders of government intervention. (2) Parent: i know that this is an unofficial mod , but xp is the best os for this machine; Model: linux is the best os for this machine. (3) Parent: that 's why it 's important to get all your propaganda from infowars and brietbart; Model: propaganda from fox news outlets.", "In the first example, the contrast is between allowing markets to regulate themselves versus an increased role of government.", "In the second example, the contradiction is due to the choice of operating system.", "In the third (invalid)
example, the model responds to a sarcastic claim with another right-wing news organization; this response is not a contradiction, since it is plausible the original speaker would also utter this statement.", "We conduct an error analysis by selecting 100 responses where the model did not partially match any of the 6 gold responses, and we found 6 main types of errors.", "One error is the model identifying an incorrect substitution span while the human responses all selected a different span to replace.", "We noticed that this occurred 5 times and may require world knowledge to understand which tokens to select.", "For example, in response to the claim Hillary Clinton could have been president if not for robots, the model generates Donald Trump in place of Hillary Clinton, whereas the gold responses generate humans / votes / Trump's tweets in place of robots.", "Another type of error is when the responses are not coherent with the parent and the language model instead determines the token selection based on the most recent context (11 cases).", "For example, given the claim bb-8 gets a girlfriend and poe still does n't have a girlf :') the Reddit FTFY has boyf instead of girlf, whereas the model generates ... and poe still does n't have a clue what i 'm talking about.", "We also found examples where the model chose poorly due to unfiltered jokes or errors in the training data (12 in total).", "In 15 cases, due to the constrained decoding, the model repeated a word until the maximum length or appended an incoherent phrase.", "For the most common error, the model made a substitution that was not contrasting, as in Table 6 (19 examples).", "Finally, we found 38 of the samples were valid responses but did not match the gold, indicating the difficulty of automatic evaluation.", "For example, in response to the claim Nintendo is the only company that puts customers over profits, the model replaces Nintendo with Rockstar (both video game companies), while the gold FTFYs had other video game companies.", "Understanding contrast and contradiction is key to argumentation, as it requires an understanding of differing points-of-view.", "Recent work examined the negation of claims via explicit negation (Bilu et al., 2015).", "Other work investigated the detection of different points-of-view in opinionated text (Al Khatib et al., 2012; Paul et al., 2010).", "Wachsmuth et al.
(2017; 2018) retrieved arguments for and against a particular stance using online debate forums.", "In non-argumentative text, researchers predicted contradictions for types such as negation, antonyms, phrasal, or structural (de Marneffe et al., 2008) or those that can be expressed with functional relations (Ritter et al., 2008).", "Other researchers have incorporated entailment models (Kloetzer et al., 2013) or crowd-sourcing methods (Takabatake et al., 2015).", "Contradiction has also become a part of the natural language inference (NLI) paradigm, with datasets labeling contradiction, entailment, or neutral (Bowman et al., 2015a; Williams et al., 2018).", "The increase in resources with contrast and contradiction has resulted in new representations with contrastive meaning (Chen et al., 2015; Nguyen et al., 2016; Vulić, 2018; Conneau et al., 2017).", "Most of this work has focused on identifying contrast or contradiction, while we aim to generate contrast.", "Furthermore, while contradiction and contrast are present in these corpora, we obtain distant-labeled alignments for contrast at the word and phrase level.", "Our dataset also includes contrastive concepts and entities, while other corpora primarily contain antonyms and explicit negation.", "Contrast also appears in the study of stance, where the opinion towards a target may vary.", "The SemEval 2016 Stance Detection for Twitter task (Mohammad et al., 2016) involved predicting if a tweet favors a target entity.", "The Interpretable Semantic Similarity task (Agirre et al., 2016) involved identifying semantic relation types (including opposition) between headlines or captions.", "Target-specific stance prediction in debates is addressed by Hasan and Ng (2014) and Walker et al. (2012).", "Fact checking can be viewed as stance toward an event, resulting in research on social media (Lendvai and Reichel, 2016; Mihaylova et al., 2018), politician statements (Vlachos and Riedel, 2014), news articles (Pomerleau and Rao, 2017), and Wikipedia (Thorne et al., 2018).", "In computational argumentation mining, identifying claims and other argumentative components is a well-studied task (Stab and Gurevych, 2014).", "Daxenberger et al. (2017) and Schulz et al.
(2018) developed approaches to detect claims across diverse claim detection datasets.", "Recently, a shared task was developed for argument reasoning comprehension (Habernal et al., 2018).", "The best system (Choi and Lee, 2018) used models pre-trained on NLI data (Bowman et al., 2015b), which contains contradictions.", "While this work is concerned with identification of argumentative components, we propose to generate new claims.", "In the field of argument generation, Wang and Ling (2016) train neural abstractive summarizers for opinions and arguments.", "Additional work involved generating opinions given a product rating (Wang and Zhang, 2017).", "Bilu and Slonim (2016) combine topics and predicates via a template-based classifier.", "This work involves the generation of claims but in relation to a topic.", "Other researchers generated political counter-arguments supported by external evidence (Hua and Wang, 2018) and generated argumentative dialogue by maximizing mutual information (Le et al., 2018).", "This research considers end-to-end argument generation, which may not be coherent, whereas we focus specifically on contrastive claims.", "We presented a new source of over 1 million contrastive claim pairs that can be mined from social media sites such as Reddit.", "We provided an analysis and models to filter noisy training data from 49% down to 25%.", "We created neural models for generating contrastive claims and obtained significant improvements in automated metrics and human evaluations for Reddit and AMT test data.", "Our goal is to incorporate this model into an argumentative dialogue system.", "In addition to generating claims with a contrasting view, we can also retrieve supporting evidence for the newly-generated claims.", "Additionally, we plan to experiment with using our model to improve claim detection (Daxenberger et al., 2017) and stance prediction (Bar-Haim et al., 2017).", "Our model could be used to generate artificial data to enhance classification performance on these tasks.", "To improve our model, we plan to experiment with retrieval-based approaches to handle low-frequency terms and named entities, as sequence-to-sequence models are likely to have trouble in this environment.", "One possibility is to incorporate external knowledge with entity linking over Wikipedia articles to find semantically-relevant substitutions.", "Another way to improve the model is by introducing controllable generation.", "One aspect of controllability is intention; our model produces contrastive claims without understanding the view of the original claim.", "Category embeddings partially address this issue (some labels are Liberal or Conservative), but labels are not available for all views.", "Going forward, we hope to classify the viewpoint of the original claim and then generate a claim with a desired orientation.", "Furthermore, we hope to improve on the generation task by identifying the types of claims we encounter.", "For example, we may want to change the target in some claims but the polarity in others.", "We also plan to improve the dataset by improving our models for contrastive pair prediction to reduce noise.", "Finally, we hope that this dataset proves useful for related tasks such as textual entailment (providing examples of contradiction) and argument comprehension (providing counterexamples of arguments), or even unrelated tasks like humor or error correction.", "We would like to thank the AMT annotators for their work and the anonymous
reviewers for their valuable feedback.", "The authors also thank Alyssa Hwang for providing annotations and Smaranda Muresan, Chris Kedzie, Jessica Ouyang, and Elsbeth Turcan for their helpful comments." ]
[ "abstain", "abstain", "abstain", "method", "objective", "objective", "abstain", "abstain", "abstain", "objective", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "objective", "abstain", "method", "method", "abstain", "objective", "method", "abstain", "objective", "objective", "objective", "method", "method", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "method", "abstain", "method", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "other", "other", "other", "abstain", "other", "method", "objective", "method", "result", "objective", "objective", "result", "method", "result", "abstain", "abstain", "method", "abstain", "method", "result", "abstain", "result", "method", "other", "other" ]
[ "In this paper, we focus on the problem of citing sentence generation, which entails generating a short text to capture the salient information in a cited paper and the connection between the citing and cited paper.", "We present BACO , a BA ckground knowledge-and CO ntent-based framework for citing sentence generation, which considers two types of information: (1) background knowledge by leveraging structural information from a citation network; and (2) content , which represents in-depth information about what to cite and why to cite .", "First, a citation network is encoded to provide background knowledge .", "Second, we apply salience estimation to identify what to cite by estimating the importance of sentences in the cited paper.", "During the decoding stage, both types of information are combined to facilitate the text generation.", "We then conduct joint training of the generator and citation function classification to make the model aware of why to cite .", "Our experimental results show that our framework outperforms comparative baselines.", "A citation systematically, strategically, and critically synthesizes content from a cited paper in the context of a citing paper (Smith, 1981).", "A paper's text that refers to prior work, which we herein refer to as citing sentences, forms the conceptual basis for a research question or problem; identifies issues, contradictions, or gaps with state of the art solutions; and prepares readers to understand the contributions of a citing paper, e.g., in terms of theory, methods, or findings (Elkiss et al., 2008).", "Writing meaningful and concise citing sentences that capture the gist of cited papers and identify connections between citing and cited papers is not trivial (White, 2004).", "Learning how to write up information about related work with appropriate and meaningful citations is particularly challenging for new scholars (Mansourizadeh and Ahmad, 2011).", "To assist scholars with note taking on prior work when working on a new research problem, this paper focuses on the task of citing sentence generation, which entails identifying salient information from cited papers and capturing connections between cited and citing papers.", "With this work, we hope to reduce scientific information overload for researchers by providing examples of concise citing sentences that address information from cited papers in the context of a new research problem and related write up.", "While this task cannot and is not meant to replace the scholarly tasks of finding, reading, and synthesizing prior work, the proposed computational solution is intended to support especially new researchers in practicing the process of writing effective and focused reflections on prior work given a new context or problem.", "A number of recent papers have focused on the task of citing sentence generation (Hu and Wan, 2014; Saggion et al., 2020; Xing et al., 2020), which is defined as generating a short text that describes a cited paper B in the context of a citing paper A, and the sentences before and after the citing sentences in paper A are considered as context.", "However, previous work has mainly utilized limited information from citing and cited papers to solve this task.", "We acknowledge that any such solution, including ours, is a simplification of the intricate process of how scholars write citing sentences.", "Given this motivation, we explore two sets of information to generate citing sentences, namely background knowledge in the form of citation networks, and 
content from both citing and cited papers, as shown in Figure 1.", "Using citation networks was inspired by the fact that scholars have analyzed such networks to identify the main themes and research developments in domain areas such as information sciences (Hou et al., 2018), business modeling (Li et al., 2017), and pharmaceutical research (Chen and Guan, 2011).", "Figure 1: An example from our dataset (source: ACL Anthology Network corpus (Radev et al., 2013)).", "The red text in the citing paper is the citing sentence, and the special token #REFR indicates the citation of the cited paper.", "Our framework aims at capturing information from two perspectives: background knowledge and content.", "The background knowledge is learned by obtaining structural features of the citation network.", "The content information entails estimated sentence salience (higher salience is highlighted by darker color) in the cited paper and the corresponding citation function of the cited paper to the citing paper.", "We use the content of citing and cited papers as a second set of features to capture two more in-depth content features: (1) What to cite: while the overall content of a cited paper needs to be understood by the authors of the citing paper, not all content is relevant for writing citing sentences.", "Therefore, we follow the example of estimating salient sentences (Yasunaga et al., 2019) and use the predicted salience to filter crucial information that should be integrated into the resulting citing sentence; (2) Why to cite: we define citation function as an approximation of an author's reason for citing a paper (Teufel et al., 2006).", "A number of previous studies on citation functions have used citing sentences and their context for classification (Zhao et al., 2019; Cohan et al., 2019).", "Our paper incorporates citation functions into citing sentence generation so that the generated citing sentences can be coherent given their context and can still contain the motivation for a specific citation.", "In this paper, we propose a BAckground knowledge- and COntent-based framework, named BACO.", "Specifically, we encode a citation network based on citation relations among papers to obtain background knowledge, and the given citing and cited papers to provide content information.", "We extend a standard pointer-generator (See et al., 2017) to copy words from cited and citing papers, and determine what to cite by estimating sentence salience in the cited paper.", "The various pieces of captured information are then combined as the context for the decoder.", "Furthermore, we extend our framework to include why to cite by jointly training the generation with citation function classification, facilitating the acquisition of the content information.", "As for the dataset, we extended the ACL Anthology Network corpus (AAN) (Radev et al., 2013) with citing sentences extracted using RegEx.", "We then hand-annotated the citation functions on a subset of the dataset and trained a citation function labeling model based on SciBERT (Beltagy et al., 2019).", "The resulting labeling model was then used to automatically label the rest of the data to build a large-scale dataset.", "We summarize our contributions as follows: We propose a BAckground knowledge- and COntent-based framework, named BACO, for citing sentence generation.", "We manually annotated a subset of citing sentences with citation functions to train a SciBERT-based model to automatically label the rest of the data for citing sentence generation.", "Based on
the results from experiments, we show that BACO outperforms comparative baselines by at least 2.57 points on ROUGE-2.", "Several studies on citing sentence generation have used keyword-based summarization methods (Hoang and Kan, 2010; Chen and Zhuge, 2016, 2019).", "To that end, they built keyword-based trees to extract sentences from cited papers as related work write-ups.", "These studies have two limitations: First, since related work sections are not simply (chronological) summaries of cited papers, synthesizing prior work in this manner is insufficient.", "Second, extractive summarization uses verbatim content from cited papers, which implies intellectual property issues (e.g., copyright violations) as well as ethical problems, such as a lack of intellectual engagement with prior work.", "Alternatively, abstractive summarization approaches, such as methods based on linear programming (Hu and Wan, 2014) and neural seq2seq methods (Wang et al., 2018), have also been explored.", "These approaches mainly focus on utilizing papers' content information, specifically on the text of cited papers directly.", "A recent paper that went beyond summarizing the content of cited papers (Xing et al., 2020) used a multi-source, pointer-generator network with a cross attention mechanism to calculate the attention distribution between the citing sentences' context and the cited paper's abstract.", "Our paper is based on the premise that citation network analysis can provide background knowledge that facilitates the understanding of papers in a field.", "Prior analyses of citation networks have been used to reveal the cognitive structure and interconnectedness of scientific (sub-)fields (Moore et al., 2005; Bruner et al., 2010), and to understand and detect trends in academic fields (You et al., 2017; Asatani et al., 2018).", "Network analysis has also been applied to citation networks to identify influential papers and key concepts (Huang et al., 2018), and to scope out research areas.", "While previous studies have shown that using text from citing papers is useful to generate citing sentences, the benefit of other content-based features of a citation (e.g., reasons for citing) is insufficiently understood (Xing et al., 2020).", "Extant literature on citation context analysis (Moravcsik and Murugesan, 1975; Lipetz, 1965), which focused on the connections between the citing and cited papers with respect to purposes and reasons for citations, has found that citation function (Ding et al., 2014; White, 2004) is an important indicator of why an author chose to cite specific paper(s).", "Based on a content analysis of 750 citing sentences from 60 papers published in two prominent physics journals, Lipetz (1965) identified 11 citation functions, such as questioned, affirmed, or refuted the cited paper's premises.", "Similarly, Moravcsik and Murugesan (1975) qualitatively coded the citation context of 30 articles on high energy physics, finding 10 citation functions grouped into 5 pairs: conceptual-operational, organic-perfunctory, evolutionary-juxtapositional, confirmative-negational, valuable-redundant.", "Citation context analysis has also been used to study the valence of citing papers towards cited papers (Athar, 2011; Abu-Jbara et al., 2013) by classifying citation context as positive, negative, or neutral.", "In this paper, we adopt Abu-Jbara et al.
(2013)'s definition of a positive citation as a citation that explicitly states the strength(s) of a cited paper, or a situation where the citing paper's work is guided by the cited paper.", "In contrast to that, a negative citation is one that explicitly states the weakness(es) of a cited paper.", "A neutral citation is one that objectively summarizes the cited paper without an additional evaluation.", "In addition to these three categories, we also consider mixed citation contexts (Cullars, 1990), which are citations that contain both positive and negative evaluations of a cited paper, or where the evaluation is unclear.", "Given that our paper is a first attempt to integrate citation functions into citing sentence generation, we opted to start with a straightforward valence category schema before exploring more complex schemas in future work.", "We first extended the AAN (Radev et al., 2013) with the extracted citing sentences using RegEx.", "We followed the process in (Xing et al., 2020) to label 1,200 randomly sampled citing sentences with their citation functions.", "The mark-up was done by 6 coders who were provided with definitions of positive, negative, neutral, and mixed citation functions, and ample examples for each valence category.", "Our codebook, including definitions and examples of citation functions, is shown in Table 1.", "After the annotation, we randomly split the dataset into 800 instances for training and the remaining 400 for testing.", "We then used the 800 human-annotated instances to train a citation function labeling model with 10-fold cross validation.", "The labeling task was treated as a multi-class classification problem.", "Our labeling model was built upon SciBERT (Beltagy et al., 2019), a pre-trained language model based on BERT (Devlin et al., 2019) but trained on a large corpus of scientific text.", "We added a multilayer perceptron (MLP) to SciBERT, and fine-tuned the whole model on our dataset.", "As for the input, we concatenated each citing sentence with its context in the citing paper, and inserted a special tag [CLS] at the beginning and another special tag [SEP] to separate them.", "The final hidden state that corresponded to [CLS] was used as the aggregate sequence representation.", "This state was fed into the MLP, followed by the softmax function, for predicting the citation function of the citing sentence.", "We report details of test results and dataset statistics in the Appendix, Section A.1.", "Our proposed framework includes an encoder and a generator, as shown in Figure 2."
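Before turning to the framework itself, here is a minimal sketch of the SciBERT-based labeling model described above. The two-layer head size and the use of the Hugging Face transformers API are assumptions for illustration; only the [CLS]-plus-MLP structure comes from the text.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CitationFunctionLabeler(nn.Module):
    """SciBERT [CLS] representation followed by an MLP classification head."""
    def __init__(self, num_classes=4, model_name="allenai/scibert_scivocab_uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, num_classes))

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # final hidden state of [CLS]
        return self.head(cls)

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
# The tokenizer prepends [CLS] and uses [SEP] to separate the two segments,
# matching the citing-sentence / context input described above.
batch = tokenizer("the citing sentence ...", "its context ...", return_tensors="pt")
model = CitationFunctionLabeler()
logits = model(batch["input_ids"], batch["attention_mask"])
probs = torch.softmax(logits, dim=-1)  # distribution over the four functions
```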
"The encoder takes the citation network and the citing and cited papers as input, and encodes them to provide background knowledge and content information, respectively.", "The generator contains a decoder that can copy words from the citing and cited paper while retaining the ability to produce novel words, and a salience estimator that identifies key information from the cited paper.", "We then trained the framework with citation function classification to enable the recognition of why a paper was cited.", "Our encoder (the yellow shaded area in Figure 2) consists of two parts: a graph encoder that was trained to provide background knowledge based on the citation network, and a hierarchical RNN-based encoder that encodes the content information of the citing and cited papers.", "We designed a citation network pre-training method for providing the background knowledge.", "In detail, we first constructed a citation network as a directed graph $G = (V, E)$.", "$V$ is a set of nodes/papers (we use node and paper interchangeably) and $E$ is a set of directed edges.", "Each edge links a citing paper (source) to a cited paper (target).", "To utilize $G$ in our task, we employed a graph attention network (GAT) (Veličković et al., 2018) as our graph encoder, which leverages masked self-attentional layers to compute the hidden representation of each node.", "This GAT has been shown to be effective on multiple citation network benchmarks.", "We input a set of node pairs $\{(v_p, v_q)\}$ into it for training on the link prediction task.", "We pre-trained our graph encoder network using negative sampling to learn the node representations $h^n_p$ for each paper $p$, which contain structural information of the citation network and can provide background knowledge for the downstream task.", "Given the word sequence $\{cw_i\}$ of the citing sentence's context and the word sequence $\{aw_j\}$ of the cited paper's abstract, we input the embedding of word tokens (e.g., $e(w_t)$) into a hierarchical RNN-based encoder that includes a word-level Bi-LSTM and a sentence-level Bi-LSTM.", "The output word-level representation of the citing sentence's context is denoted as $\{h^{cw}_i\}$, and the cited paper's abstract is encoded similarly as its word-level representation $\{h^{aw}_j\}$.", "Meanwhile, their sentence-level representations are represented as $\{h^{cs}_m\}$ and $\{h^{as}_n\}$.", "Our generator (the green shaded area in Figure 2) is an extension of the standard pointer-generator (See et al., 2017).", "It integrates both background knowledge and content information as context for text generation.", "The generator contains a decoder and an additional salience estimator that predicts the salience of sentences in the cited paper's abstract for refining the corresponding attention.", "The decoder is a unidirectional LSTM conditioned on all encoded hidden states.", "The attention distribution is calculated as in (Bahdanau et al., 2015).", "Since we considered both the citing sentence's context and the cited paper's abstract on the source side, we applied the attention mechanism to $\{h^{cw}_i\}$ and $\{h^{aw}_j\}$ separately to obtain two attention vectors $a^{ctx}_t$, $a^{abs}_t$, and their corresponding context vectors $c^{ctx}_t$, $c^{abs}_t$ at step $t$."
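A sketch of the citation-network pre-training step described at the start of this section, using PyTorch Geometric. The dot-product edge scoring, the two-layer GAT, and the layer sizes are assumptions; the text only specifies a GAT trained for link prediction with negative sampling.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATConv
from torch_geometric.utils import negative_sampling

class GraphEncoder(torch.nn.Module):
    """Two-layer GAT producing node (paper) representations h^n_p."""
    def __init__(self, in_dim, hid_dim=128, heads=4):
        super().__init__()
        self.conv1 = GATConv(in_dim, hid_dim, heads=heads)
        self.conv2 = GATConv(hid_dim * heads, hid_dim, heads=1)

    def forward(self, x, edge_index):
        x = F.elu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

def link_prediction_loss(h, edge_index, num_nodes):
    """Score a candidate edge (p, q) by the dot product h_p . h_q and train
    with negative sampling: real citation edges against random non-edges."""
    neg_edge_index = negative_sampling(edge_index, num_nodes=num_nodes)
    pos = (h[edge_index[0]] * h[edge_index[1]]).sum(-1)
    neg = (h[neg_edge_index[0]] * h[neg_edge_index[1]]).sum(-1)
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(torch.cat([pos, neg]), labels)
```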
"We then aggregated the input context $c_t$ from the citing sentence's context, the cited paper's abstract, and the background knowledge by applying a dynamic fusion operation based on modality attention, as described in (Moon et al., 2018b,a), which selectively attenuates or amplifies each modality based on its importance to the task: $[att^{ctx}; att^{abs}; att^{net}] = \sigma(W_m [c^{ctx}_t; c^{abs}_t; c^{net}_t] + b_m)$ (1), $\widetilde{att}^m = \frac{\exp(att^m)}{\sum_{m' \in \{abs, ctx, net\}} \exp(att^{m'})}$ (2), $c_t = \sum_{m \in \{abs, ctx, net\}} \widetilde{att}^m \, c^m_t$ (3), where $c^{net}_t = [h^n_p; h^n_q]$ represents the learned background knowledge for papers $p$ and $q$ and is kept constant during all decoding steps $t$, and $[att^{ctx}; att^{abs}; att^{net}]$ is the attention vector.", "To enable our model to copy words from both the citing sentence's context and the cited paper's abstract, we calculated the generation probability and copy probabilities as follows: $[p_{gen}, p_{copy1}, p_{copy2}] = \mathrm{softmax}(W_{ctx} c^{ctx}_t + W_{abs} c^{abs}_t + W_{net} c^{net}_t + W_{dec} s_t + W_{emb} e(w_{t-1}) + b_{ptr})$ (4), where $p_{gen}$ is the probability of generating words, $p_{copy1}$ is the probability of copying words from the citing sentence's context, $p_{copy2}$ is the probability of copying words from the cited paper's abstract, $s_t$ represents the hidden state of the decoder at step $t$, and $e(w_{t-1})$ indicates the input word embedding.", "Meanwhile, the context vector $c_t$, which can be seen as an enhanced representation of source-side information, was concatenated with the decoder state $s_t$ to produce the vocabulary distribution $P_{vocab}$: $P_{vocab} = \mathrm{softmax}(V'(V[s_t; c_t] + b) + b')$ (5).", "Finally, for each text, we defined an extended vocabulary as the union of the vocabulary and all words appearing in the source text, and calculated the probability distribution over the extended vocabulary to predict words $w$: $P(w) = p_{gen} P_{vocab}(w) + p_{copy1} \sum_{i: cw_i = w} a^{ctx}_{t,i} + p_{copy2} \sum_{i: aw_i = w} a^{abs}_{t,i}$ (6).", "The estimation of the salience of each sentence that occurs in a cited paper's abstract was used to identify which information the generation should concentrate on.", "We assumed a sentence's salience to depend on the citing paper, such that the same sentences from one cited paper can have different salience in the context of different citing papers.", "Hence, we represented this salience as a conditional probability $P(s_i | D^{src})$, which can be interpreted as the probability of picking sentence $s_i$ from a cited paper's abstract given the citing paper $D^{src}$.", "We first obtained the document representation $d^{src}$ of a citing paper as the average of all its abstract's sentence representations.", "Then, for calculating salience, which is defined as $P(s_i | D^{src})$, we designed an attention mechanism that assigns a weight $\alpha_i$ to each sentence $s_i$ in a cited paper's abstract $D^{tgt}$.", "This weight is expected to be large if the semantics of $s_i$ are similar to $d^{src}$.", "Formally, we have: $\hat{\alpha}_i = v^{\top} \tanh(W_{doc} d^{src} + W_{sent} h^{as}_i + b_{sal})$ (7), $\alpha_i = \frac{\hat{\alpha}_i}{\sum_{s_k \in D^{tgt}} \hat{\alpha}_k}$ (8), where $h^{as}_i$ is the $i$-th sentence representation in the cited paper's abstract, $v$, $W_{doc}$, $W_{sent}$, and $b_{sal}$ are learnable parameters, and $\alpha_i$ is the salience score of the sentence $s_i$.", "We then used the estimated salience of sentences in the cited paper's abstract to update the word-level attention $a^{abs}_t$ over the cited paper's abstract so that the decoder can focus on these important sentences during text generation.", "Considering that the estimated salience $\alpha_i$ is a sentence weight, we determined each token in a sentence to share the same value of $\alpha_i$.", "Accordingly, the new attention $\tilde{a}^{abs}_t$ over the cited paper's abstract became $\tilde{a}^{abs}_t = \alpha_i \, a^{abs}_t$.", "After normalizing $\tilde{a}^{abs}_t$, the context vector $c^{abs}_t$ was updated accordingly."
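A minimal sketch of the dynamic fusion step (Equations 1-3). The sigmoid scoring non-linearity is an assumption, since the original symbol was lost in extraction, and mapping each modality to a single scalar score is a simplification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityFusion(nn.Module):
    """Fuse the three context vectors with learned modality attention."""
    def __init__(self, dim):
        super().__init__()
        self.w_m = nn.Linear(3 * dim, 3)  # plays W_m and b_m in Equation 1

    def forward(self, c_ctx, c_abs, c_net):
        # raw per-modality scores: sigma(W_m [c_ctx; c_abs; c_net] + b_m)
        scores = torch.sigmoid(self.w_m(torch.cat([c_ctx, c_abs, c_net], dim=-1)))
        att = F.softmax(scores, dim=-1)             # Equation 2
        stacked = torch.stack([c_ctx, c_abs, c_net], dim=-2)
        # c_t = sum over modalities of att^m * c^m_t  (Equation 3)
        return (att.unsqueeze(-1) * stacked).sum(dim=-2)

# usage: fuse = ModalityFusion(dim=256); c_t = fuse(c_ctx, c_abs, c_net)
```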
"During model training, the objective of our framework covers three parts: generation loss, salience estimation loss, and citation function classification.", "The generation loss was based on the prediction of words from the decoder.", "We minimized the negative log-likelihood of all target words $w_t$ and used it as the objective function of generation: $L_{gen} = -\sum_t \log P(w_t)$ (9).", "To include extra supervision into the salience estimation, we adopted a ROUGE-based approximation (Yasunaga et al., 2017) as the target.", "We assume citing sentences to depend heavily on salient sentences from the cited papers' abstracts.", "Based on this premise, we calculated the ROUGE scores between the citing sentence and sentences in the corresponding cited paper's abstract to obtain an approximation of the salience distribution as the ground truth.", "If a sentence shared a high ROUGE score with the citing sentence, this sentence would be considered a salient sentence, because the citing sentence was likely to be generated based on this sentence, while a low ROUGE score implied that this sentence may be ignored during the generation process due to its low salience.", "Kullback-Leibler divergence was used as our loss function for enforcing the output salience distribution to be close to the normalized ROUGE score distribution of sentences in the cited paper's abstract: $L_{sal} = D_{KL}(R \,\|\, \alpha)$ (10), $R_i = \frac{r(s_i)}{\sum_{s_k \in D^{tgt}} r(s_k)}$ (11), where $\alpha, R \in \mathbb{R}^m$, $R_i$ refers to the scalar indexed $i$ in $R$ ($1 \leq i \leq m$), and $r(s_i)$ is the average of the ROUGE-1 and ROUGE-2 F1 scores between the sentence $s_i$ in the cited paper's abstract and the citing sentence.", "We also introduced a hyper-parameter as a constant rescaling factor to sharpen the distribution.", "We added a supplementary component to enable the citation function classification to be trained with the generator, aiming to make the generation aware of why to cite.", "Following a prior general pipeline of citation function classification (Cohan et al., 2019; Zhao et al., 2019), we first concatenated the last hidden state $s_T$ of the decoder, which we considered as a representation of the generated citing sentence, with the document representation $d^{ctx}$ of the citing sentence's context.", "Here, $d^{ctx}$ was calculated as the average of its sentence representations.", "We then fed the concatenated representation into an MLP followed by the softmax function to predict the probability $\hat{y}_{func}$ of the citation function for the generated citing sentence.", "Cross-entropy loss was set as the objective function for training the classifier with the ground truth label $y_{func}$, which is a one-hot vector: $L_{func} = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{K} y^i_{func}(j) \log \hat{y}^i_{func}(j)$ (12), where $N$ refers to the size of the training data and $K$ is the number of different citation functions.", "The joint training objective is: $J(\theta) = L_{gen} + \lambda_S L_{sal} + \lambda_F L_{func}$ (13).", "Following previous work, we report ROUGE-1 (unigram), ROUGE-2 (bigram), and ROUGE-L (longest common subsequence) scores to evaluate the generated citing sentences (Lin, 2004).", "Implementation details are shown in the Appendix, Section A.2.", "We also report ROUGE F1 scores on our dataset.", "Finally, we compare our model to competitive baselines: PTGEN (See et al., 2017) is the original pointer-generator network.", "EXT-Oracle (Xing et al., 2020) selects the best possible sentence from the abstract of a cited paper that gives the highest ROUGE w.r.t. the ground truth."
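A sketch of the joint objective just described (Equations 9-13), for a single training example. The weight values are illustrative, since the text leaves the lambdas unspecified.

```python
import torch
import torch.nn.functional as F

def joint_loss(log_p_words, salience, rouge_target,
               func_logits, func_label, lambda_s=1.0, lambda_f=1.0):
    """J(theta) = L_gen + lambda_S * L_sal + lambda_F * L_func.

    log_p_words:  (T,) log P(w_t) of the gold target tokens
    salience:     (m,) predicted salience distribution alpha over abstract sentences
    rouge_target: (m,) normalized ROUGE distribution R
    func_logits:  (K,) citation-function scores; func_label: gold class index
    """
    l_gen = -log_p_words.sum()                                       # Eq. 9
    # KL(R || alpha); F.kl_div expects log-probabilities as the input argument
    l_sal = F.kl_div(salience.log(), rouge_target, reduction="sum")  # Eq. 10
    l_func = F.cross_entropy(func_logits.unsqueeze(0),
                             func_label.view(1))                     # Eq. 12
    return l_gen + lambda_s * l_sal + lambda_f * l_func              # Eq. 13
```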
"This method can be seen as an upper bound of extractive methods.", "PTGEN-Cross (Xing et al., 2020) enhances the original pointer-generator network with a cross attention mechanism applied to the citing sentence's context and the cited paper's abstract.", "Additionally, we report results from using several extractive methods that have been used for summarization tasks, including LexRank (Erkan and Radev, 2004), an unsupervised graph-based method for computing relative importance in extractive summarization.", "Table 2 (Experimental results for our framework and comparative models; ROUGE-1 / ROUGE-2 / ROUGE-L): Extractive: LexRank 11.96 / 1.04 / 9.69; TextRank 12.35 / 1.19 / 10.04; EXT-Oracle 22.60 / 4.21 / 16.83. Abstractive: PTGEN 24.60 / 6.16 / 19.19; PTGEN-Cross 27.08 / 7.14 / 20.61; BACO 32.54 / 9.71 / 24.90.", "As the results in Table 2 show, our proposed framework (BACO) outperformed all of the considered baselines.", "BACO achieved scores of 32.54 (ROUGE-1), 9.71 (ROUGE-2), and 24.90 (ROUGE-L).", "We also observed that the extractive methods performed comparatively poorly, and notably worse than the abstractive methods.", "All abstractive methods did better than EXT-Oracle, a result different from performance on other summarization tasks, such as news document summarization.", "We think that this deviation from prior performance outcomes is because citing sentences in the domain of scholarly papers contain new expressions when referring to cited papers, which requires high-level summarizing or paraphrasing of cited papers instead of copying sentences verbatim from cited papers.", "Our results suggest that extractive methods may not be suitable for our task.", "Among the extractive methods we tested, we observed EXT-Oracle to be superior to the others, which aligns with our expectation that EXT-Oracle serves as an upper bound of extractive methods.", "For abstractive methods, our framework achieved an improvement of about 2.57 points in ROUGE-2 F1 score compared to PTGEN-Cross.", "We assume two reasons for this improvement: First, BACO uses richer text features, e.g., what to cite (sentence salience estimation) and why to cite (citation function classification), that provide useful information for this task.", "Second, we included structural information from the citation network, which might offer supplemental background knowledge about a field that is not explicitly covered by the given cited and citing papers.", "We performed an ablation study to investigate the efficacy of the three main components in our framework: (1) we removed the node features (papers) that are output from the graph encoder to test the effectiveness of background knowledge; (2) we removed the predicted salience of sentences in the abstracts of cited papers to assess the effectiveness of one part of content (what to cite); and (3) we removed the training of citation function classification and only trained the generator to test the effectiveness of the other part of content (why to cite).", "As the removal of node features of papers reduces the input to the dynamic fusion operation for the context vector (Equation 1), we changed Equation 2 to a sigmoid function so that the calculated attention becomes a vector of size 2 when combining the context vectors of the citing sentence's context and the cited paper's abstract.", "Table 3 presents the results of the ablation study.", "We observed the ROUGE-2 F1 score to drop by 0.81 after the removal of the nodes (papers) feature.", "This indicates that considering background
"The ROUGE-2 F1 score dropped by 2.20 after disregarding the salience of sentences in the cited paper.", "This implies that sentence-level salience estimation is beneficial, and it can be used to identify important sentences during the decoding phase so that the decoder can pay higher attention to those sentences.", "Table 4: Human evaluation results (Gold / BACO / PTGEN-Cross): Fluency 4.91 / 3.64 / 3.52, Relevance 4.86 / 3.07 / 2.64, Coherence 4.88 / 2.77 / 2.61, Overall 4.79 / 2.95 / 2.69.", "This process might also align with how scholars write citing sentences: they focus on specific parts or elements of cited papers, e.g., methods or results, and do not consider all parts equally when writing citing sentences.", "Lastly, the ROUGE-2 F1 score dropped by 1.04 after the removal of citation function classification, indicating that this feature is also helpful to the text generation task.", "We conclude that for citing sentence generation, considering and training a model on background knowledge, sentence salience, and citation function improves the performance.", "We present an illustrative example generated by our re-implementation of PTGEN-Cross and by BACO, and compare both to the ground truth (see Appendix, Section A.3).", "The output from BACO showed a higher overlap with the ground truth, specifically because it included background that is not explicitly covered in the cited paper.", "Furthermore, our output contained the correct citation function (... have been shown to be effective), which was present in the ground truth, but missing in PTGEN-Cross's output.", "We sampled 50 instances from the generated texts.", "Three graduate students who are fluent in English and familiar with NLP were asked to rate citing sentences produced by BACO and the re-implemented PTGEN-Cross with respect to four aspects on a 1 (very poor) to 5 (excellent) point scale: fluency (whether a citing sentence is fluent), relevance (whether a citing sentence is relevant to the cited paper's abstract), coherence (whether a citing sentence is coherent within its context), and overall quality.", "Every instance was scored by the three judges, and we averaged their scores (Table 4).", "Our results showed that citing sentences generated by BACO were generally scored better than output by PTGEN-Cross (e.g., Relevance score: BACO=3.07; PTGEN-Cross=2.64).", "This finding provided further evidence for the effectiveness of including the features we used for this task.", "We have brought together multiple pieces of information from and about cited and citing papers to improve citing sentence generation.", "We integrated them into BACO , a BAckground knowledge- and COntent-based framework for citing sentence generation, which learns and uses information that relates to (1) background knowledge ; and (2) content .", "Extensive experimental results suggest that our framework outperforms competitive baseline models.", "This work is limited in several ways.", "First, we only demonstrated the utility of our model within the standard RNN-based seq2seq framework.", "Second, our citation function scheme only contained valence-based items.", "Finally, while this method is intended to support scholars in practicing strategic note taking on prior work with respect to a new literature review or research project, we did not evaluate the usefulness or effectiveness of this training option for researchers.", "In future work, we plan to investigate the adaptation of our framework into
more powerful models such as the Transformer (Vaswani et al., 2017).", "We also hope to extend our citation function scheme beyond the valence of citing sentences to more fine-grained categories, such as those outlined in Moravcsik and Murugesan (1975) and Lipetz (1965).", "This work is intended to support scholars in doing research, not to replace or automate any scholarly responsibilities.", "Finding, reading, understanding, reviewing, reflecting upon, and properly citing literature are key components of the research process and require deep intellectual engagement, which remains a human task.", "The presented approach is meant to help scholars to see examples of how to strategically synthesize scientific papers relevant to a certain topic or research problem, thereby helping them to cope with information overload (or research deluge) and hone their scholarly writing skills.", "Additional professional responsibilities also still apply, such as not violating intellectual property/copyright.", "We believe that this work does not present foreseeable negative societal consequences.", "While not intended, our method may be misused for the automated generation of parts of literature reviews.", "We strongly discourage this misuse as it violates basic assumptions about scholarly diligence, responsibilities, and expectations.", "We advocate for our method to be used as a scientific writing training tool." ]
[ "method", "method", "abstain", "method", "abstain", "method", "result", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "objective", "method", "abstain", "objective", "abstain", "abstain", "abstain", "method", "result", "abstain", "objective", "objective", "result", "abstain", "abstain", "objective", "abstain", "method", "abstain", "objective", "method", "result", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "method", "objective", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "other", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "other", "abstain", "result", "abstain", "result", "method", "result", "method", "objective", "abstain", "objective", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method" ]
[ "While state-of-the-art NLP models have been achieving the excellent performance of a wide range of tasks in recent years, important questions are being raised about their robustness and their underlying sensitivity to systematic biases that may exist in their training and test data.", "Such issues come to be manifest in performance problems when faced with out-of-distribution data in the field.", "One recent solution has been to use counterfactually augmented datasets in order to reduce any reliance on spurious patterns that may exist in the original data.", "Producing high-quality augmented data can be costly and time-consuming as it usually needs to involve human feedback and crowdsourcing efforts.", "In this work, we propose an alternative by describing and evaluating an approach to automatically generating counterfactual data for the purpose of data augmentation and explanation.", "A comprehensive evaluation on several different datasets and using a variety of state-of-the-art benchmarks demonstrate how our approach can achieve significant improvements in model performance when compared to models training on the original data and even when compared to models trained with the benefit of human-generated augmented data.", "Deep neural models have recently made remarkable advances on sentiment analysis (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Xie et al., 2020).", "However, their implementation in practical applications still encounters significant challenges.", "Of particular concern, these models tend to learn intended behavior that is often associated with spurious patterns (artifacts) (Jo and Bengio, 2017; Slack et al., 2020a).", "As an example, in the sentence Nolan's films always shock people, thanks to his superb directing skills , the most influential word for the prediction of a positive sentiment should be superb instead of Nolan or film .", "The issue of spurious patterns also partially affects the out-of-domain (OOD) generalization of the models trained on independent, identical distribution (IID) data, leading to performance decay under distribution shift (Quionero-Candela et al., 2009; Sugiyama and Kawanabe, 2012; Ovadia et al., 2019).", "Researchers have recently found that such concerns about model performance decay and social bias in NLP come about out-of-domain because of a sensitivity to semantically spurious signals (Gardner et al., 2020), and recent studies have uncovered a problematic tendency for gender bias in sentiment analysis (Zmigrod et al., 2019; Maudslay et al., 2019; Lu et al., 2020).", "To this end, one of the possible solutions is data augmentation with counterfactual examples (Kaushik et al., 2020) to ensure that models learn real causal associations between the input text and labels.", "For example, a sentiment-flipped counterfactual of last example could be Nolan's movies always bore people, thanks to his poor directorial skills. 
.", "When added to the original set of training data, such kinds of counterfactually augmented data (CAD) have shown their benefits on learning real causal associations and improving the model robustness in recent studies (Kaushik et al., 2020, 2021; Wang and Culotta, 2021).", "Unlike gradient-based adversarial examples (Wang and Wan, 2019; Zhang et al., 2019; Zang et al., 2020), which cannot provide a clear boundary between positive and negative instances to humans, counterfactuals could provide human-like logic to show a modification to the input that makes a difference to the output classification (Byrne, 2019).", "Recent attempts for generating counterfactual examples (also known as minimal pairs) rely on human-in-the-loop systems.", "Kaushik et al. (2020) proposed a human-in-the-loop method to generate CAD by employing human annotators to generate sentiment-flipped reviews.", "The human labeler is asked to make minimal and faithful edits to produce counterfactual reviews.", "Similarly, Srivastava et al. (2020) presented a framework to leverage strong prior (human) knowledge to understand the possible distribution shifts for a specific machine learning task; they use human commonsense reasoning as a source of information to build a more robust model against spurious patterns.", "Although useful for reducing sensitivity to spurious correlations, collecting enough high-quality human annotations is costly and time-consuming.", "The theory behind the ability of CAD to improve model robustness in sentiment analysis is discussed by Kaushik et al. (2021), where researchers present a theoretical characterization of the impact of noise in causal and non-causal features on model generalization.", "However, methods for automatically generating CAD have received less attention.", "The only existing approach (Wang and Culotta, 2021) has been tested on the logistic regression model only, despite the fact that recent state-of-the-art methods for sentiment classification are driven by neural models.", "Also, their automatically generated CAD cannot produce competitive performance compared to human-generated CAD.", "We believe that their method does not sufficiently leverage the power of pre-trained language models and fails to generate fluent and effective CAD.", "In addition, the relationships between out-of-domain generalization and sensitivity to spurious patterns were not explicitly investigated by Wang and Culotta (2021).", "To address these issues, we use four benchmark datasets (IMDB movie reviews as hold-out test while Amazon, Yelp, and Twitter datasets for out-of-domain generalization test) to further explore the efficacy of CAD for sentiment analysis.", "First, we conduct a systematic comparison of several different state-of-the-art models (Wang and Culotta, 2021).", "This reveals how large Transformer-based models (Vaswani et al., 2017) with larger parameter sizes may improve the resilience of machine learning models.", "Specifically, we have found that for increasing parameter spaces, CAD's performance benefit tends to decrease, regardless of whether CAD is controlled manually or automatically.", "Second, we introduce a novel masked language model for helping improve the fluency and grammar correctness of the generated CAD.", "Third, we add a fine-tuned model as a discriminator for automatically evaluating the edit-distance, using data generated with minimal and fluent edits (same requirements for human annotators in Kaushik et al. 
(2020)) to ensure the quality of generated counterfactuals.", "Experimental results show that this leads to significant prediction benefits in both hold-out and generalization tests.", "To the best of our knowledge, we are the first to automatically generate counterfactuals for use as augmented data to improve the robustness of neural classifiers, which can outperform existing, state-of-the-art, human-in-the-loop approaches.", "We will release our code and datasets on GitHub 1 .", "This work mainly touches on three important areas: approaches to evaluation that go beyond traditional accuracy measures (Bender and Koller, 2020; Warstadt et al., 2020), the importance of counterfactuals in eXplainable AI (XAI) (Byrne, 2019; Keane and Smyth, 2020), and out-of-domain generalization in sentiment analysis (Kim and Hovy, 2004; Zhang et al., 2018; Zhang and Zhang, 2019).", "There has been increasing interest in the role of robustness and causal thinking in ML, often leveraging human feedback.", "Recently, some of the standard benchmark datasets have been challenged (Gardner et al., 2020; Ribeiro et al., 2020), where model performance is significantly lower on contrast sets than on original test sets, with differences of up to 25% in some cases.", "Researchers have proposed counterfactual data augmentation approaches for building robust models (Maudslay et al., 2019; Zmigrod et al., 2019; Lu et al., 2020), and find that spurious correlations threaten a model's validity and reliability.", "In an attempt to address this problem, Kaushik et al. (2020) explore opportunities for developing human-in-the-loop systems by using crowd-sourcing to generate counterfactual data from original data, for data augmentation.", "Teney et al. (2020) show the continued effectiveness of CAD in computer vision (CV) and NLP.", "1 https://github.com/lijiazheng99/Counterfactuals-for-Sentiment-Analysis", "Work on counterfactual explanation also shares important conceptual features with our work: human counterfactual explanations are minimal in the sense that they select a few relevant causes (Byrne, 2019; Keane and Smyth, 2020), as is the requirement of minimal edits in our generation process.", "This has been explored more in the field of CV (Goyal et al., 2019; Kenny and Keane, 2021), but investigated less in NLP.", "Recent work (Jacovi and Goldberg, 2020) highlights explanations of a given causal format, and Yang et al.
(2020a) generate counterfactuals for explaining the prediction of financial text classification.", "We propose a similar but different research question, that is, whether the automatically generated counterfactuals can be used for data augmentation to build more robust models, which has not been considered by the previous methods in XAI (Pedreschi et al., 2019; Slack et al., 2020b; Yang et al., 2020b; Ding et al., 2020).", "In the case of Sentiment Analysis , most of the previous works report experiments using a hold-out test on the IID dataset (Liu, 2012; Yang et al., 2016; Johnson and Zhang, 2017).", "The current state-of-the-art methods make use of large pre-trained language models (e.g., BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) and SMART-RoBERTa (Jiang et al., 2020)) for calculating input representations.", "It has been shown that these methods can suffer from spurious patterns (Kaushik et al., 2020; Wang and Culotta, 2021).", "Very recently, Wang and Culotta (2021) provided a starting point for exploring the efficacy of automatically generated CAD for sentiment analysis, but it is still based on IID hold-out tests only.", "However, spurious patterns in the training and test sets could be tightly coupled, which may limit the possibility of observing their attendant accuracy issues using a hold-out test methodology.", "For this reason, we designed an indirect method for evaluating the robustness of models, by comparing the performance of models trained on original and augmented data using out-of-domain data.", "The prediction benefit for out-of-domain data should provide some evidence about whether a model's sensitivity to spurious patterns has been successfully mitigated.", "The resulting counterfactuals can be used for data augmentation and can also provide contrastive explanations for classifiers, an important and desirable consideration given the recent move towards more XAI (Ribeiro et al., 2016; Lundberg and Lee, 2017; Lipton, 2018; Pedreschi et al., 2019; Slack et al., 2020b).", "We propose a new approach for automatically generating counterfactuals to enhance the robustness of sentiment analysis models by inverting the sentiment of causally important terms according to Algorithm 1 and based on the following stages:", "1. The identification of genuine causal terms using self-supervised contextual decomposition (Section 3.1).", "2. Generating counterfactual samples by", "(a) RM-CT (removing causal terms) and", "(b) REP-CT (replacing the causal terms) (Section 3.2).", "3. Selecting the human-like counterfactuals using MoverScore (Zhao et al., 2019) (Section 3.3).", "The end result will be a set of counterfactuals that can be used to augment an existing dataset.", "To identify causally important terms, we propose a hierarchical method, based on the sampling and sensitivity of the contextual decomposition technique from Jin et al. (2019), by incrementally removing words from a sentence in order to evaluate the model's sensitivity to these words.", "Significant changes in model outputs suggest the removal of important terms.", "For example, removing the word best from The movie is the best that I have ever seen.
, is likely to alter a model's sentiment prediction more than the removal of other words from the sentence; thus best is an important word with respect to this sentence's sentiment.", "In a similar way, phrases beginning with negative pronouns will likely be important; for instance, not satisfy you is important in This movie could not satisfy you .", "Given a word (or phrase starting with negative limitations) w in the sentence s , the importance of w can be calculated as in Equation 1, where s \\ p denotes the sentence resulting after masking out a single word (or a negative phrase as above).", "We use l ( s \\ p ; \hat{s} ) to represent the model prediction after replacing the masked-out context, while \hat{s} is an input sequence sampled from the input s .", "\\ p indicates the operation of masking out the phrase p in an input document D from the training set.", "The specific candidate causal terms found by this masking operation vary for different prediction models.", "This approach and the scoring function in Equation 1 are used in Algorithm 1 in two ways, to generate two types of plausible counterfactuals.", "First, it is used to identify words to remove from a sentence to produce a plausible counterfactual.", "This is referred to as RM-CT and is performed by lines 3-5 in Algorithm 1; for a sentence S^(i) , its correctly labeled sentiment words are identified (line 3) and sorted based on Equation 1 (line 4) with classifier C , and the most important of these words is removed from S^(i) to produce S^(i)_rm (line 5).", "Second, the REP-CT technique instead replaces each causally important sentiment word in S^(i) with an alternative word that has an opposing sentiment polarity (lines 6-11 in Algorithm 1).", "To do this, the words in S^(i) are each considered for replacement in order of their importance (lines 6 & 7) to create a new sentence S^(i)_rep .", "Algorithm 1 Generating plausible counterfactual instances.", "Input: Test document D^(n) = { P_1 , P_2 , ..., P_n } , with corresponding ground-truth labels Y , pre-trained Mask Language Model MLM , fine-tuned transformer classifier C , Positive Word Dictionaries POS , Negative Word Dictionaries NEG .", "( pos and neg are predicates for positive and negative labels)", "Output: Plausible counterfactuals D^(k)_cf = { D^(k)_rep , D^(k)_rm }. 1: for P_k in D^(n) do 2: for S^(i) , Y_i in P_k do 3: \hat{S}^(i) ← { w ∈ S^(i) | ( w ∈ POS ∧ Y_i = pos ) ∨ ( w ∈ NEG ∧ Y_i = neg ) } 4: S^(i)_sorted ← sort( \hat{S}^(i) , key = φ( w , \hat{S}^(i) ) ) (eq. 1) 5: S^(i)_rm ← S^(i)_sorted [1:] 6: S^(i)_rep ← S^(i)_sorted 7: for w ∈ S^(i)_rep do 8: W_p ← MLM( S^(i)_mask ( w ) , S^(i)_rep ) 9: W_c ← { w ∈ W_p | ( w ∈ POS ∧ Y_i != pos ) ∨ ( w ∈ NEG ∧ Y_i != neg ) } 10: S^(i)_rep ( w ) ← sort( W_c , key = φ( w , W_c ) )[0] 11: end for 12: P^(k)_rm ← P^(k)_rm + S^(i)_rm 13: P^(k)_rep ← P^(k)_rep + S^(i)_rep 14: end for 15: D^(n)_rm ← D^(n)_rm + P^(k)_rm 16: D^(n)_rep ← D^(n)_rep + P^(k)_rep 17: end for 18: return D^(n)_rm , D^(n)_rep",
"For each word w we use a masked language model (MLM) to generate a set of plausible replacements, W_p (line 8), and a subset of these, W_c , as replacement candidates if their sentiment is different from the sentiment of S^(i) , which is given by Y_i (line 9).", "Here we use BERT-base-uncased as the pre-trained MLM for the SVM and BiLSTM models 1 .", "The size of the candidate substitutions found by the MLM output is set to 100 for all models.", "Then, W_c is sorted in descending order of importance using Equation 1, and the most important candidate is selected and used to replace w in S^(i)_rep (line 10).", "Algorithm 1 continues in this fashion to generate counterfactual sentences using RM-CT and REP-CT for each sentence in each paragraph of the target document 2 .", "It returns two counterfactual documents, which correspond to documents produced from the RM-CT and REP-CT sentences; see lines 15-18.", "The above approach is not guaranteed to always generate counterfactuals.", "Typically, reviews that cannot be transformed into plausible counterfactuals contain spurious associations that interfere with the model's predictions.", "1 For Transformer-based models, we use their own pre-trained MLM (e.g., RoBERTa and XLNet) as the generator.", "2 Generating one counterfactual edit for an IMDB instance takes an average of 3.4 seconds based on the RoBERTa-Large model.", "For example, in our method, the negative review The film is pretty bad, and her performance is overacted will first be modified to The film is pretty good, and her performance is lifelike .", "The revised review's prediction will remain negative.", "Meanwhile, the word her will be identified as a potential causal term.", "To alleviate this problem, we additionally conduct synonym substitution for those instances that have already been modified with antonym substitution of their causal terms.", "As an example, we will continue replacing the word her with their until the prediction has been flipped; see also Zmigrod et al. (2019) for related ideas.",
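To make the masking-based scoring and the RM-CT edit concrete, here is a minimal sketch. The classifier wrapper, lexicon, and function names are hypothetical, and for simplicity it scores a single masked variant of the sentence rather than averaging over sampled contexts \hat{s} as Equation 1 does.

```python
from transformers import pipeline

# Hypothetical off-the-shelf sentiment classifier used as the scorer.
clf = pipeline('sentiment-analysis',
               model='distilbert-base-uncased-finetuned-sst-2-english')

def prob(sentence, label):
    out = clf(sentence)[0]
    return out['score'] if out['label'] == label else 1.0 - out['score']

def importance(words, i, label):
    # Drop in model confidence when word i is masked out: a one-sample
    # stand-in for the Eq. 1 importance of that word.
    full = prob(' '.join(words), label)
    masked = prob(' '.join(w for j, w in enumerate(words) if j != i), label)
    return full - masked

def rm_ct(sentence, label, lexicon):
    # RM-CT: remove the most important correctly-labeled sentiment word.
    words = sentence.split()
    cands = [i for i, w in enumerate(words) if w.lower() in lexicon]
    if not cands:
        return sentence
    top = max(cands, key=lambda i: importance(words, i, label))
    return ' '.join(w for i, w in enumerate(words) if i != top)

print(rm_ct('The movie is the best that I have ever seen', 'POSITIVE', {'best'}))
```

REP-CT would follow the same scoring loop but query an MLM for opposite-polarity replacements of each candidate instead of deleting it.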
"In conclusion, then, the final augmented dataset that is produced consists of three parts: (1) counterfactuals generated by RM-CT; (2) counterfactuals generated by REP-CT; and (3) adversarial examples generated by synonym substitutions.", "When generating plausible counterfactuals, it is desirable to make minimal changes so that the resulting counterfactual is as similar as possible to the original instance (Miller, 2019; Keane and Smyth, 2020).", "To evaluate this for the approach described, we use the MoverScore (Zhao et al., 2019), an edit-distance scoring metric originally designed for machine translation. It confirms that the MoverScore for the automatic CAD instances is marginally higher when compared to human-generated counterfactuals, indicating greater similarity between the counterfactuals and their original instances.", "The MoverScore between human-generated counterfactuals and original reviews is 0.74 on average (minimum value of 0.55), and our augmented data results in a slightly higher average score than the human-generated data for all models.", "The generated counterfactuals and synonym substitutions that achieve a MoverScore above 0.55 are combined with the original dataset for training robust classifiers.", "Our evaluation uses three different kinds of datasets: in-domain data, challenge data, and out-of-domain data.", "We first adopt two of the most popular benchmark datasets, SST-2 and IMDB (Maas et al., 2011), to show the recent advances on sentiment analysis with the benefit of pre-trained models.", "However, we mainly focus on the robustness of various models for sentiment analysis in this work, rather than in-domain accuracy.", "Hence, following Wang and Culotta (2021) and Kaushik et al. (2020), we perform binary sentiment classification experiments on the IMDB dataset sampled from Maas et al. (2011) that contains 1707 training, 245 validation, and 488 testing examples, together with a challenge dataset (paired counterfactuals).", "Based on the in-domain IMDB data, Kaushik et al. (2020) employ crowd workers not to label documents, but to revise each movie review to reverse its sentiment, without making any gratuitous changes.", "We directly use the human-generated counterfactuals by Kaushik et al. (2020) as our challenge data, enforcing a 50:50 class balance.",
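The MoverScore filter described above (keeping only edits scoring over 0.55 against their originals) might look as follows, assuming the reference MoverScore implementation of Zhao et al. (2019); the function and argument names mirror that package but should be treated as illustrative.

```python
from moverscore_v2 import get_idf_dict, word_mover_score

def filter_counterfactuals(originals, counterfactuals, threshold=0.55):
    # Score each (original, counterfactual) pair and keep only the
    # sufficiently similar, i.e., minimally edited, counterfactuals.
    idf_orig = get_idf_dict(originals)
    idf_cf = get_idf_dict(counterfactuals)
    scores = word_mover_score(originals, counterfactuals, idf_orig, idf_cf,
                              stop_words=[], n_gram=1, remove_subwords=True)
    return [cf for cf, s in zip(counterfactuals, scores) if s > threshold]
```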
"We also evaluate our method on different out-of-domain datasets, including Amazon reviews (Ni et al., 2019) from six genres: beauty, fashion, appliances, gift cards, magazines, and software, a Yelp review dataset, and the Semeval-2017 Twitter dataset (Rosenthal et al., 2017).", "These have all been sampled to provide a 50:50 label split.", "The size of the training data has been kept the same for all methods, and the results reported are the average from five runs to facilitate a direct comparison with baselines (Kaushik et al., 2020, 2021).", "We first describe the performance of the current state-of-the-art methods on sentiment analysis based on the SST-2 and IMDB benchmark datasets.", "Next, we will discuss the performance benefits of using our automatically generated counterfactuals on an in-domain test.", "Table 2: The accuracy of various models for sentiment analysis using different training/testing data, including the human-generated counterfactual data and counterfactual samples generated by our pipeline (AC: our method; columns O/O, CF/O, CF/CF, O/CF, C/O, AC/O, C/CF, AC/CF). SVM(TF-IDF) (-): 80.0, 58.3, 91.2, 51.0, 83.7, 84.8, 87.3, 86.1; Bi-LSTM (0.2M): 79.3, 62.5, 89.1, 55.7, 81.5, 82.2, 92.0, 88.5; Transformer-based models: BERT [ICLR, 2021] (110M): 87.4, 80.4, 90.8, 82.2, 88.5, 90.6, 95.1, 92.2; WWM-BERT-Large (335M): 91.2, 86.9, 96.9, 93.0, 91.0, 91.8, 95.3, 94.1; XLNet-Large (340M): 95.3, 90.8, 98.0, 93.9, 93.9, 94.9, 96.9, 95.5; RoBERTa-Large (355M): 93.4, 91.6, 96.9, 93.0, 93.6, 94.1, 96.7, 94.3.", "We further compare our method, the human-label method, and two state-of-the-art style-transfer methods (Sudhakar et al., 2019; Madaan et al., 2020) in terms of model robustness on a generalization test.", "Finally, we provide an ablation study to discuss the influence of edit distance on the performance benefits.", "As the human-generated counterfactuals (Kaushik et al., 2020) are sampled from Maas et al. (2011), the results in Table 1 cannot be directly compared with Table 2. 3", "As shown in Table 1, by comparing BiLSTM to Transformer-based methods, it can be seen that remarkable advances in sentiment analysis have been achieved in recent years.", "On SST-2, SMART-RoBERTa (Jiang et al., 2020) outperforms Bi-LSTM by 10.8% (97.5% vs. 86.7%) in accuracy, and a similar improvement is observed on IMDB (96.3% vs. 86.0%).", "According to the results, we select the following models for our experiments, which cover a spectrum of statistical, neural, and pre-trained neural methods: SVM (Suykens and Vandewalle, 1999), Bi-LSTM (Graves and Schmidhuber, 2005), BERT-Base (Devlin et al., 2018), RoBERTa-Large (Liu et al., 2019), and XLNet-Large (Yang et al., 2019).", "The SVM model for sentiment analysis is from scikit-learn and uses TF-IDF (Term Frequency-Inverse Document Frequency) scores, while the Transformer-based models are built based on the Pytorch-Transformer package 4 .", "We keep the prediction models the same as Kaushik et al.
(2020), except for Naive Bayes, which has been dropped due to the high-variance performance it showed in our experiments.", "In the following experiments, we only care about whether the robustness of the models has been improved when training on the augmented dataset (original data & CAD).", "In practice, different counterfactual examples have been generated for different models in terms of their own causal terms, while the hyper-parameters for the different prediction models are all identified using a grid search conducted over the validation set.", "On the Influence of Spurious Patterns.", "As shown in Table 2, we find that the linear model (SVM) trained on the original and challenge (human-generated counterfactuals) data can achieve 80% and 91.2% accuracy, respectively, when testing on the IID hold-out data.", "However, the accuracy of the SVM model trained on the original set drops dramatically when testing on the challenge data (91.2% vs. 51%), and vice versa (80% vs. 58.3%).", "Similar findings were reported by Kaushik et al. (2020), where a similar pattern was observed for the Bi-LSTM and BERT-base models.", "This provides further evidence supporting the idea that spurious associations in machine learning models are harmful to performance on the challenge set for sentiment analysis.", "On the Benefits of Robust BERT.", "As shown in Table 3, we also test whether the sensitivity to spurious patterns has been eliminated in the robust BERT model.", "We notice that the correlations of the real causal associations superb and poor are improved from 0.213 to 0.627 and from -0.551 to -0.999, respectively.", "Meanwhile, the correlation of the spurious association film is decreased from 0.446 to 0.019 and from -0.257 to -7e-7 on the positive and negative samples, respectively.", "This shows that the model trained with our CAD data does provide robustness against spurious patterns.", "On the Influence of Model Size.", "Previous works (Kaushik et al., 2021; Wang and Culotta, 2021) have not investigated the performance benefits on larger pre-trained models.", "We therefore conduct experiments on various Transformer-based models with different parameter sizes to explore whether larger Transformer-based models can still enjoy the performance benefits of CAD (Table 2).", "We observe that although the test result increases with parameter size (up to 94.9% using XLNet), the performance benefit brought by both human-generated and auto-generated CAD declines continuously as the parameter size increases.", "For example, the BERT-base-uncased model trained on the auto-generated combined dataset receives a 3.2% (90.6% vs. 87.4%) improvement in accuracy, while accuracy increases by only 0.6% (91.8% vs. 91.2%) for WWM-BERT-Large.", "This suggests that larger pre-trained Transformer models may be less sensitive to spurious patterns.", "Robustness in the In-domain Test.", "We can see that all of the models trained on automatic CAD (shown as AC in Table 2) outperform those trained on human-generated CAD when testing on the original data, with gains varying by model (AC/O vs. C/O) as follows: SVM (+1.1%), Bi-LSTM (+0.7%), BERT-base-uncased (+2.1%), BERT-Large (+0.8%), XLNet-Large (+1.0%), and RoBERTa-Large (+0.5%).", "If we adopt the automatic CAD (AC), we note a distinct improvement in Table 2 across all models trained on the challenge data of 11.3% on average (AC/O vs. CF/O), whereas the human-generated CAD achieves a 10.2% accuracy improvement (C/O vs.
CF/O) on average.", "It is noteworthy that the human-generated CAD can slightly outperform our method when testing on the human-generated (CF) data; this may be because the training and test sets of the human-generated (CF) data are generated by the same group of labelers.", "[Table residue: out-of-domain test using different training data, accuracy on Amazon reviews (SVM / BERT): Orig & CAD (Our Method) (3.4k): 78.6 / 84.7; Orig & CAD (By Human) (3.4k): 79.3 / 83.3; remaining rows truncated.]", "Robustness in the Generalization Test.", "We explore how our approach makes prediction models more robust out-of-domain in Table 4. For a direct comparison between our method and the human-generated method, we adopt the fine-tuned BERT-base model trained with the augmented dataset (original & automatically revised data).", "The fine-tuned model is directly tested on out-of-domain data without any adjustment.", "As shown in Table 4, only our method and the human-label method can outperform the BERT model trained on the original data, with average accuracy improvements of 6.5% and 5.3%, respectively.", "Our method also offers performance benefits across three datasets even when compared to the human-label method on BERT.", "Neural Method vs. Statistical Method.", "As shown in Table 4, the performance of the SVM model with automatic CAD is more robust than with the other automated methods (Sudhakar et al., 2019; Madaan et al., 2020) across all datasets.", "However, the human-labeled CAD improves Amazon reviews' accuracy compared to our method using the SVM model by 0.7%.", "This indicates that human-generated data may lead to more performance benefits on a statistical model.", "Automatic CAD vs. Style-transfer Methods.", "As shown in Table 4, the style-transfer results are consistent with Kaushik et al. (2021).", "We find that the sentiment-flipped instances generated by style-transfer methods degrade the test accuracy for all models on all kinds of datasets, whereas our method achieves the best performance in all settings.", "This suggests that our method has a clear advantage for data augmentation in sentiment analysis when compared to the state-of-the-art style-transfer models.", "Our Methods vs. Implausible CAD.",
"The authors of the only existing approach for automatically generating CAD (Wang and Culotta, 2021) report that their methods are not able to match the performance of human-generated CAD.", "Our methods consistently outperform the human-labeled methods on both in-domain and out-of-domain tests.", "To further provide quantitative evidence of the influence of the edit distance in automatic CAD, we present an ablation study in Table 6.", "The result shows that the quality of the generated CAD, which is ignored in the previous work of Wang and Culotta (2021), is crucial when training robust classifiers.", "In particular, the BERT model fine-tuned with implausible CAD (below the threshold) receives negative results comparable to those of the style-transfer samples, with performance decreases on all datasets except for Twitter.", "The three most popular kinds of edits are shown in Table 5.", "These are negation word removal, sentiment word replacement, and the combination of the two.", "It can be observed from these examples that we ensure the edits on the original samples are minimal and fluent, as was required previously with human-annotated counterfactuals (Kaushik et al., 2020).", "As shown in Table 5, we flipped the model's prediction by replacing the causal terms in the phrase badly directed, badly acted and boring with well directed, well acted and entertaining , or by removing the negation in No laughs throughout the movie. to get Laughs throughout the movie for a movie review.", "We also noticed that our method may face challenges when handling more complex reviews.", "For example, the sentence Watch this only if someone has a gun to your head ... maybe. is an apparently negative review for a human.", "However, it is hard for our algorithm to flip the sentiment of such reviews with no explicit causal terms.", "Techniques for sarcasm and irony detection may help in dealing with this challenge.", "We proposed a new framework to automatically generate counterfactually augmented data (CAD) for enhancing the robustness of sentiment analysis models.", "By combining the automatically generated CAD with the original training data, we can produce more robust classifiers.", "We further show that our methods can achieve better performance even when compared to models trained with human-generated counterfactuals.", "More importantly, our evaluation based on several datasets has demonstrated that models trained on the augmented data (original & automatic CAD) appear to be less affected by spurious patterns and generalize better to out-of-domain data.", "This suggests there exists a significant opportunity to explore the use of CAD in a range of tasks (e.g., natural language inference, natural language understanding, and social bias correction).", "Although the experiments in this paper are conducted only on the sentiment classification task, this study could be a good starting point to investigate the efficacy of automatically generated CAD for building robust systems in many NLP tasks, including Natural Language Inference (NLI), Named Entity Recognition (NER), Question Answering (QA), etc.", "We would like to thank Eoin Kenny and Prof.
Mark Keane from the Insight Centre for their helpful advice and discussion during this work.", "Also, we would like to thank the anonymous reviewers for their insightful comments and suggestions to help improve the paper.", "This publication has emanated from research conducted with the financial support of Science Foundation Ireland under Grant number 12/RC/2289 P2." ]
[ "abstain", "abstain", "abstain", "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "result", "objective", "method", "abstain", "objective", "abstain", "other", "other", "other", "other", "other", "other", "other", "abstain", "abstain", "other", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "method", "method", "abstain", "result", "method", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "method", "result", "objective", "abstain", "abstain", "other", "other", "other" ]
[ "Sequence-to-sequence constituent parsing requires a linearization to represent trees as sequences.", "Top-down tree linearizations, which can be based on brackets or shift-reduce actions, have achieved the best accuracy to date.", "In this paper, we show that these results can be improved by using an in-order linearization instead.", "Based on this observation, we implement an enriched in-order shift-reduce linearization inspired by Vinyals et al. (2015)'s approach, achieving the best accuracy to date on the English PTB dataset among fully-supervised single-model sequence-to-sequence constituent parsers.", "Finally, we apply deterministic attention mechanisms to match the speed of state-of-the-art transition-based parsers, thus showing that sequence-to-sequence models can match them, not only in accuracy, but also in speed.", "Sequence-to-sequence (seq2seq) neural architectures have proved useful in several NLP tasks, with remarkable success in some of them such as machine translation, but they lag behind the state of the art in others.", "In constituent parsing, seq2seq models still need to improve to be competitive in accuracy and efficiency with their main competitors: transition-based constituent parsers (Dyer et al., 2016; Liu and Zhang, 2017b; Fernandez-Gonzalez and Gomez-Rodrguez, 2019).", "Vinyals et al. (2015) laid the first stone in seq2seq constituent parsing, proposing a linearization of phrase-structure trees as bracketed sequences following a top-down strategy, which can be predicted from the input sequence of words by any off-the-shelf seq2seq framework.", "While this approach is very simple, its accuracy and efficiency are significantly behind the state of the art in the fully-supervised single-model scenario.", "Most attempts to improve this approach focused on modifying the neural network architecture, while keeping the top-down linearization strategy.", "As exceptions, Ma et al. (2017) and Liu and Zhang (2017a) proposed linearizations based on sequences of transition-based parsing actions instead of brackets.", "Ma et al. (2017) tried a bottom-up linearization, but they obtained worse results than top-down approaches.", "1 Liu and Zhang (2017a) kept the top-down strategy, but using transitions of the top-down transition system of Dyer et al. (2016) instead of a bracketed linearization, achieving a higher performance.", "In transition-based constituent parsing, an in-order algorithm has recently proved superior to the bottom-up and top-down approaches (Liu and Zhang, 2017b), but we know of no applications of this approach in seq2seq parsing.", "Contributions In this paper, we advance the understanding of linearizations for seq2seq parsing, and improve the state of the art, as follows: (1) we show that the superiority of a transition-based top-down linearization over a bracketing-based one observed by Liu and Zhang (2017a) does not hold when both are tested under the same framework.", "In fact, we show that the additional information provided by the larger vocabulary in the linearization of Vinyals et al. 
(2015) is beneficial to seq2seq predictions.", "(2) We implement a novel in-order transition-based linearization, based on the in-order transition system by Liu and Zhang (2017b), and manage to notably increase parsing accuracy with respect to previous approaches.", "(3) We enhance the in-order representation of parse trees by adding extra information following the shift-reduce version of the Vinyals et al. (2015) linearization, obtaining state-of-the-art accuracy among seq2seq parsers and on par with some well-known transition-based approaches.", "1 We also tested empirically that a bottom-up linearization is not suitable for seq2seq parsing and discarded that option.", "(4) We bridge the remaining gap with transition-based parsers' parsing speed by applying a new variant of deterministic attention (Kamigaito et al., 2017; Ma et al., 2017) to restrict the hidden states used to compute the attention vector, doubling the system's speed.", "The result is a seq2seq parser 2 that, for the first time, matches the speed and accuracy of transition-based parsers implemented under the same neural framework.", "(5) Using the neural framework of Dyer et al. (2015) as a testing ground, we perform a homogeneous comparison among different seq2seq linearizations and widely-known transition-based parsers.", "To cast constituent parsing as seq2seq prediction, each parse tree needs to be represented as a sequence of symbols that can be predicted from an input sentence.", "Initially, Vinyals et al. (2015) proposed a top-down bracketed linearization of constituent trees, where opening and closing brackets include non-terminal labels and POS tags are normalized by replacing them with a tag XX.", "An example is shown in linearization a of Figure 1.", "As an alternative, Liu and Zhang (2017a) presented a shift-reduce linearization based on the top-down transition system defined for constituent parsing by Dyer et al. (2016) (example b in Figure 1).", "This provides three transitions that can be used on a stack and a buffer to build a constituent tree: a Shift transition to push words from the buffer into the stack, a Non-Terminal-X transition to push a non-terminal node X into the stack, and a Reduce transition to pop elements from the stack until a non-terminal node is found and create a new subtree with all these elements as its children, pushing this new constituent into the stack.", "Following Vinyals et al.
(2015)'s linearization, where closing brackets also include the non-terminal label, we define an equivalent shift-reduce variant, where the Reduce transition is also parameterized with the non-terminal on top of the resulting subtree ( Reduce-X ).", "In that way, we can one-to-one map opening brackets to Non-Terminal-X transitions, closing brackets to Reduce-X actions, and XX tags to Shift transitions, as shown in example c of Figure 1.", "This enriched version will enlarge the vocabulary, but will also add some extra information that, as we will see below, improves parsing accuracy. 2", "2 Source code available at https://github.com/danifg/InOrderSeq2seq .", "As an alternative to the top-down parser of Dyer et al. (2016), Liu and Zhang (2017b) define a transition system based on in-order traversal, as in left-corner parsing (Rosenkrantz and Lewis, 1970): the non-terminal node on top of the tree being built is only considered after the first child is completed in the stack, building each subtree in a bottom-up manner, but choosing the non-terminal node on top before the new constituent is reduced.", "Transitions are the same as in the top-down algorithm (plus a Finish transition to terminate the parsing process), but the effect of applying a Reduce transition is different: it pops all elements from the stack until the first non-terminal node is found, which is also popped together with the preceding element in the stack to build a new constituent with all of them as children of the non-terminal node. 3", "This algorithm pushed state-of-the-art accuracies in shift-reduce constituent parsing and, as we show in Section 4, it can be successfully applied as a linearization method for seq2seq constituent parsing.", "Sequence d in Figure 1 exemplifies the in-order linearization.", "Similarly to the enriched top-down variant, we also extend the in-order shift-reduce linearization by parametrizing Reduce transitions.", "Additionally, we can also add extra information to Shift transitions.", "Suzuki et al. (2018) leave the POS tags of punctuation symbols out of the normalization proposed by Vinyals et al. (2015) without further explanation, but possibly they consider that it can help seq2seq models.", "We adapt this idea to our novel enriched in-order linearization and lexicalize Shift transitions when a . or a , is pushed into the stack, as Shift-. and Shift-, , respectively. 4", "In our experiments, we see that lexicalizing Shift transitions has indeed an impact on parsing performance.", "In Figure 1 and sequence e , we include an example of this linearization technique.", "Note that, although we use a transition-based linearization of parse trees, our approach is agnostic to the stack structure and the parsing process is performed by a simple seq2seq model that straightforwardly translates input sequences of words into sequences of shift-reduce actions.", "3 See Appendix A for more details about the top-down and in-order transition systems.", "4 We do not lexicalize Shift transitions on the enriched shift-reduce top-down variant to perform a fair comparison against the original linearization by Liu and Zhang (2017a).", "Baseline Model In our experiments, we test all proposed linearizations in the seq2seq neural architecture designed by Liu and Zhang (2017a) and implemented on the framework developed by Dyer et al. (2015).",
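As a toy illustration of the enriched in-order linearization described above, the following sketch converts a nested-tuple constituent tree (with POS tags already normalized to XX) into a transition sequence. The tree encoding and function name are our own, and n-ary constituents and the Finish transition are handled in a simplified way.

```python
def in_order(tree, top_level=True):
    # A leaf is a (pos, word) pair; a constituent is (label, [children]).
    actions = []
    if isinstance(tree[1], str):
        word = tree[1]
        # Lexicalize Shift only for '.' and ',' as in the enriched variant.
        actions.append('Shift-' + word if word in {'.', ','} else 'Shift')
        return actions
    label, children = tree
    actions += in_order(children[0], False)   # first child before the label
    actions.append('NonTerminal-' + label)    # in-order: label after 1st child
    for child in children[1:]:
        actions += in_order(child, False)
    actions.append('Reduce-' + label)         # parameterized Reduce
    if top_level:
        actions.append('Finish')
    return actions

tree = ('S', [('NP', [('XX', 'The'), ('XX', 'movie')]),
              ('VP', [('XX', 'is'), ('ADJP', [('XX', 'good')])]),
              ('XX', '.')])
print(in_order(tree))
# ['Shift', 'NonTerminal-NP', 'Shift', 'Reduce-NP', 'NonTerminal-S', 'Shift',
#  'NonTerminal-VP', 'Shift', 'NonTerminal-ADJP', 'Reduce-ADJP', 'Reduce-VP',
#  'Shift-.', 'Reduce-S', 'Finish']
```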
"This architecture proved to outperform the majority of seq2seq approaches, even without implementing beam search (which penalizes parsing speed).", "The difference with respect to the vanilla seq2seq configuration (Vinyals et al., 2015) is that two separate attention models are used to cover two different and variable segments of the input.", "This provides improvements in accuracy, regardless of the linearization method used.", "More specifically, Liu and Zhang (2017a) follow the common practice in stack-LSTM-based shift-reduce parsers (Dyer et al., 2015, 2016; Liu and Zhang, 2017b) that uses a concatenation of pretrained word embeddings ( \bar{e}_{w_i} ), randomly initialized word embeddings ( e_{w_i} ), and POS tag embeddings ( e_{p_i} ) to derive (through a ReLU non-linear function) the final representation x_i of the i-th input word: x_i = relu( W_enc [ \bar{e}_{w_i} ; e_{w_i} ; e_{p_i} ] + b_enc ), where W_enc and b_enc are model parameters, and w_i and p_i represent the form and the POS tag of the i-th input word.", "This representation x_i is fed into the encoder (implemented by a BiLSTM) to output an encoder hidden state h_i : h_i = [ h^l_i ; h^r_i ] = BiLSTM( x_i ).", "As a decoder, an LSTM generates a sequence of decoder hidden states from which a sequence of actions is predicted.", "Concretely, the current decoder hidden state d_j is computed by: d_j = relu( W_dec [ d_{j-1} ; l^att_j ; r^att_j ] + b_dec ), where W_dec and b_dec are model parameters, d_{j-1} is the previous decoder hidden state, and l^att_j and r^att_j are the resulting attention vectors over the left and right segments, respectively, of the encoder hidden states h_1 ... h_n .", "These two segments of the input are defined by the index p , which is initialized to the beginning of the sentence and moves one position to the right each time a Shift transition is applied.", "Therefore, l^att_j and r^att_j are computed at timestep j as: l^att_j = \sum_{i=1}^{p} \alpha_{ij} h_i and r^att_j = \sum_{i=p+1}^{n} \alpha_{ij} h_i , where \alpha_{ij} = exp( \beta_{ij} ) / \sum_{k=1}^{n} exp( \beta_{kj} ) and \beta_{ij} = U^T tanh( W_att [ h_i ; d_{j-1} ] + b_att ).", "Then, the current token y_j is predicted from d_j as: p( y_j | d_j ) = softmax( W_pred d_j + b_pred ), where W_att , b_att , W_pred , and b_pred are parameters.", "In Figure 2, we graphically describe the neural architecture.", "Figure 2: Sequence-to-sequence neural architecture proposed by Liu and Zhang (2017a).", "Note that current state-of-the-art transition-based parsers, which rely on stack-LSTMs to represent the stack structure, are also implemented under the framework by Dyer et al. (2015) and, therefore, our approach can be fairly compared to them in terms of accuracy and speed.",
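A compact PyTorch sketch of the two-segment attention just described, together with the constant-time deterministic variant introduced in the next paragraphs, is shown below for contrast; the parameter shapes and names are illustrative assumptions, not the authors' implementation.

```python
import torch

def two_segment_attention(H, d_prev, p, W_att, b_att, U):
    # H: (n, h) encoder states h_1..h_n; d_prev: (h,) previous decoder state.
    n = H.size(0)
    concat = torch.cat([H, d_prev.expand(n, -1)], dim=-1)
    beta = torch.tanh(concat @ W_att.T + b_att) @ U   # one score per word
    alpha = torch.softmax(beta, dim=0)                # normalized over all i
    l_att = (alpha[:p, None] * H[:p]).sum(0)          # left segment, 1..p
    r_att = (alpha[p:, None] * H[p:]).sum(0)          # right segment, p+1..n
    return l_att, r_att

def deterministic_attention(H, d_prev, p, W_att, b_att, U):
    # Constant-time variant: only h_p and h_{p+1} are scored; normalizing
    # the two weights against each other is our assumption.
    pair = H[p - 1:p + 1]                             # h_p, h_{p+1} (1-based)
    concat = torch.cat([pair, d_prev.expand(2, -1)], dim=-1)
    alpha = torch.softmax(torch.tanh(concat @ W_att.T + b_att) @ U, dim=0)
    return alpha[0] * pair[0], alpha[1] * pair[1]
```

Because the deterministic variant touches only two encoder states per step, the per-token decoding cost no longer grows with sentence length.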
"Deterministic Attention Previous work (Kamigaito et al., 2017; Ma et al., 2017; Liu et al., 2018) claims that using deterministic attention mechanisms instead of the standard probabilistic variant leads to accuracy and speed gains.", "We propose a simple and effective procedure to implement deterministic attention in the architecture by Liu and Zhang (2017a), substantially reducing the time consumed by the decoder to predict the next token.", "Apart from dividing the sequence of encoder hidden states into segments, Liu and Zhang (2017a) provide explicit alignment between the input word sequence and the output transition sequence by keeping the index p that indicates a correspondence between input words and Shift transitions.", "This information can be used to force the model to focus on those encoder hidden states that are more informative for decoding at each timestep, avoiding going through the whole input to compute the attention vector, and thus considerably reducing decoding time.", "To gain some insight into which input words are most relevant, we study, on the dev set, the attention values assigned by the model to each encoder hidden state and the frequency with which each of them achieves the highest value at each timestep.", "Surprisingly, we found out that, for the top-down parser, almost 90% of the time the highest attention values were assigned to the words in positions p and p+1 , by a wide margin.", "For the in-order parser, words in those positions also received considerable attention values, but they were determinant only 75% of the time.", "Following these results, we propose a computation of l^att_j and r^att_j where only the encoder hidden states in the rightmost position ( p ) of the left segment and in the leftmost position ( p+1 ) of the right segment are considered: l^att_j = \alpha_{pj} h_p and r^att_j = \alpha_{(p+1)j} h_{p+1} . This change avoids calculating the weight \alpha_{ij} for each encoder hidden state, as needed in probabilistic attention.", "Attention vectors are computed in constant time, notably reducing running time while keeping the accuracy, as shown in our experiments.", "We test the proposed approaches on the PTB treebank (Marcus et al., 1993) with standard splits. 5", "Table 1 compares the parsing accuracy of all linearizations proposed in Section 2 to state-of-the-art fully-supervised transition-based constituent parsing models.", "The results show that our enriched in-order linearization is the most suitable option implemented so far for seq2seq constituent parsing, outperforming all existing seq2seq approaches (even without beam-search decoding) and matching some transition-based models.", "We also demonstrate that the enriched top-down variant (equivalent to the bracketed linearization of Vinyals et al. (2015)) outperforms the regular top-down approach of Liu and Zhang (2017a).", "5 Settings are detailed in Appendix A.3.", "This trend can also be seen in the in-order linearization, where the addition of more tokens (parametrized Reduce and lexicalized Shift transitions) to the vocabulary benefits model performance (a gain of 0.4 F-score points), meaning that seq2seq models make use of this additional information.", "In fact, we analysed the average length of output sequences and noticed that enriched variants with a larger vocabulary tend to produce shorter sequences.", "We hypothesize that the extra information is helping the model to better contextualize tokens in the sequence during training, minimizing the prediction of wrong tokens at decoding time.",
"Finally, we extend the implementation by Liu and Zhang (2017a) with 10-beam-search decoding and increase the F-score by 0.3 points.", "We also evaluate parsing speeds under the exact same conditions for our approach and the top-down (Dyer et al., 2016) and in-order (Liu and Zhang, 2017b) transition-based constituent parsers, implemented in the framework by Dyer et al. (2015). 6", "Table 2 shows how the proposed deterministic attention technique doubles the speed of the baseline model, putting it on par with stack-LSTM-based shift-reduce systems, which are considered one of the most efficient approaches for constituent parsing.", "We can also see from Table 1 that the presented mechanism is more beneficial in terms of accuracy for the top-down algorithm (increasing 0.2 points in F-score) than for the in-order variant (suffering a drop of 0.1 points in F-score), as could be expected from our previous analysis of attention vectors.", "Finally, at the bottom of Table 1, we show current state-of-the-art chart-based parsers.", "These approaches, while more accurate, are significantly slower than seq2seq and transition-based parsers, being less appealing for downstream applications where speed is crucial.", "We present significant accuracy and speed improvements in seq2seq constituent parsing.", "The proposed linearization techniques can be used by any off-the-shelf seq2seq model without building a specific algorithm or structure.", "In addition, any advances in seq2seq neural architectures or pretrained transformer-based language models (Devlin et al., 2019) can be directly used to enhance our approach.", "This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01, ED431G 2019/01).", "6 The implementation by Dyer et al. (2015) is not optimized for speed, but it can be used as a common framework to compare different approaches." ]
[ "abstain", "abstain", "result", "result", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "other", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "other", "abstain", "other", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "result", "abstain", "result", "abstain", "method", "other", "other" ]
[ "This paper introduces a new task of politeness transfer which involves converting non-polite sentences to polite sentences while preserving the meaning.", "We also provide a dataset of more than 1.39 million instances automatically labeled for politeness to encourage benchmark evaluations on this new task.", "We design a tag and generate pipeline that identifies stylistic attributes and subsequently generates a sentence in the target style while preserving most of the source content.", "For politeness as well as five other transfer tasks, our model outperforms the state-of-the-art methods on automatic metrics for content preservation, with a comparable or better performance on style transfer accuracy.", "Additionally, our model surpasses existing methods on human evaluations for grammaticality, meaning preservation and transfer accuracy across all the six style transfer tasks.", "The data and code is located at https:// github.com/tag-and-generate/ 1 Introduction Politeness plays a crucial role in social interaction, and is closely tied with power dynamics, social distance between the participants of a conversation, and gender (Brown et al., 1987; Danescu-Niculescu-Mizil et al., 2013).", "It is also imperative to use the appropriate level of politeness for smooth communication in conversations (Coppock, 2005), organizational settings like emails (Peterson et al., 2011), memos, official documents, and many other settings.", "Notably, politeness has also been identi-fied as an interpersonal style which can be decoupled from content (Kang and Hovy, 2019).", "Motivated by its central importance, in this paper we study the task of converting non-polite sentences to polite sentences while preserving the meaning.", "Rao and Tetreault, 2018; Xu et al., 2012; Jham-tani et al., 2017) has not focused on politeness as a style transfer task, and we argue that defining it is cumbersome.", "While native speakers of a language and cohabitants of a region have a good working understanding of the phenomenon of politeness for everyday conversation, pinning it down as a definition is non-trivial (Meier, 1995).", "There are primarily two reasons for this complexity.", "First, as noted by (Brown et al., 1987), the phenomenon of politeness is rich and multifaceted.", "Second, politeness of a sentence depends on the culture, language, and social structure of both the speaker and the addressed person.", "For instance, while using please in requests made to the closest friends is common amongst the native speakers of North American English, such an act would be considered awkward, if not rude, in the Arab culture (Kadar and Mills, 2011).", "We circumscribe the scope of politeness for the purpose of this study as follows: First, we adopt the data driven definition of politeness proposed by (Danescu-Niculescu-Mizil et al., 2013).", "Second, we base our experiments on a dataset derived from the Enron corpus (Klimt and Yang, 2004) which consists of email exchanges in an American corporation.", "Thus, we restrict our attention to the notion of politeness as widely accepted by the speakers of North American English in a formal setting.", "Even after framing politeness transfer as a task, there are additional challenges involved that differentiate politeness from other styles.", "Consider a common directive in formal communication, send me the data.", "While the sentence is not impolite, a rephrasing could you please send me the data would largely be accepted as a more polite way of phrasing the same statement 
(Danescu-Niculescu-Mizil et al., 2013).", "This example brings out a distinct characteristic of politeness.", "It is easy to pinpoint the signals for politeness.", "However, cues that signal the absence of politeness, like direct questions, statements and factuality (Danescu-Niculescu-Mizil et al., 2013), do not explicitly appear in a sentence, and are thus hard to objectify.", "Further, sentences at the other extreme of politeness, impolite ones, are typically riddled with curse words and insulting phrases.", "While interesting, such cases can typically be neutralized using lexicons.", "For our study, we focus on the task of transferring non-polite sentences to polite sentences, where we simply define non-politeness to be the absence of both politeness and impoliteness.", "Note that this is in stark contrast with the standard style transfer tasks, which involve transferring a sentence from a well-defined style polarity to the other (like positive to negative sentiment).", "We propose a tag and generate pipeline to overcome these challenges.", "The tagger identifies the words or phrases which belong to the original style and replaces them with a tag token.", "If the sentence has no style attributes, as in the case for politeness transfer, the tagger adds the tag token in positions where phrases in the target style can be inserted.", "The generator takes as input the output of the tagger and generates a sentence in the target style.", "Additionally, unlike previous systems, the outputs of the intermediate steps in our system are fully realized, making the whole pipeline interpretable.", "Finally, if the input sentence is already in the target style, our model will not add any stylistic markers and thus would allow the input to flow as is.", "We evaluate our model on politeness transfer as well as five additional tasks described in prior work (Shen et al., 2017; Prabhumoye et al., 2018; Li et al., 2018) on content preservation, fluency and style transfer accuracy.", "Both automatic and human evaluations show that our model beats the state-of-the-art methods in content preservation, while either matching or improving the transfer accuracy across six different style transfer tasks (§5).", "The results show that our technique is effective across a broad spectrum of style transfer tasks.", "Our methodology is inspired by Li et al.
(2018) and improves upon several of its limitations, as described in (§2).", "Our main contribution is the design of the politeness transfer task.", "To this end, we provide a large dataset of nearly 1.39 million sentences labeled for politeness (https://github.com/tag-and-generate/politeness-dataset).", "Additionally, we hand-curate a test set of 800 samples (from Enron emails) which are annotated as requests.", "To the best of our knowledge, we are the first to undertake politeness as a style transfer task.", "In the process, we highlight an important class of problems wherein the transfer involves going from a neutral style to the target style.", "Finally, we design a tag and generate pipeline that is particularly well suited for tasks like politeness, while being general enough to match or beat the performance of the existing systems on popular style transfer tasks.", "Politeness and its close relation with power dynamics and social interactions have been well documented (Brown et al., 1987).", "Recent work (Danescu-Niculescu-Mizil et al., 2013) in computational linguistics has provided a corpus of requests annotated for politeness, curated from Wikipedia and StackExchange.", "Niu and Bansal (2018) use this corpus to generate polite dialogues.", "Their work focuses on contextual dialogue response generation, as opposed to content-preserving style transfer, the latter being the central theme of our work.", "Prior work on the Enron corpus (Yeh and Harnly, 2006) has been mostly from a socio-linguistic perspective to observe social power dynamics (Bramsen et al., 2011; McCallum et al., 2007), formality (Peterson et al., 2011) and politeness (Prabhakaran et al., 2014).", "We build upon this body of work by using this corpus as a source for the style transfer task.", "Prior work on style transfer has largely focused on tasks of sentiment modification (Hu et al., 2017; Shen et al., 2017; Li et al., 2018), caption transfer (Li et al., 2018), persona transfer (Chandu et al., 2019; Zhang et al., 2018), gender and political slant transfer (Reddy and Knight, 2016; Prabhumoye et al., 2018), and formality transfer (Rao and Tetreault, 2018; Xu et al., 2019).", "Note that formality and politeness are loosely connected but independent styles (Kang and Hovy, 2019).", "We focus our efforts on carving out a task for politeness transfer and creating a dataset for such a task.", "Current style transfer techniques (Shen et al., 2017; Hu et al., 2017; Fu et al., 2018; Yang et al., 2018; John et al., 2019) try to disentangle source style from content and then combine the content with the target style to generate the sentence in the target style.", "Compared to prior work, Delete, Retrieve and Generate (Li et al., 2018) (referred to as DRG henceforth) and its extension (Sudhakar et al., 2019) are effective methods to generate outputs in the target style while having a relatively high rate of source content preservation.", "However, DRG has several limitations: (1) the delete module often marks content words as stylistic markers and deletes them, (2) the retrieve step relies on the presence of similar content in both the source and target styles, (3) the retrieve step is time-consuming for large datasets, (4) the pipeline makes the assumption that style can be transferred by deleting stylistic markers and replacing them with target style phrases, (5) the method relies on a fixed corpus of style attribute markers, and is thus limited in its ability to generalize to unseen data during test time.", "Our methodology
differs from these works as it does not require the retrieve stage and makes no assumptions on the existence of similar content phrases in both the styles.", "This also makes our pipeline faster, in addition to being robust to noise.", "Wu et al. (2019) treat style transfer as a conditional language modelling task.", "Their work focuses only on sentiment modification, treating it as a cloze-style task of filling in the appropriate words in the target sentiment.", "In contrast, we are capable of generating the entire sentence in the target style.", "Further, our work is more generalizable, and we show results on five other style transfer tasks.", "For the politeness transfer task, we focus on sentences in which the speaker communicates a requirement that the listener needs to fulfill.", "Common examples include imperatives ('Let's stay in touch') and questions that express a proposal ('Can you call me when you get back?').", "Following Jurafsky et al. (1997), we use the umbrella term action-directives for such sentences.", "The goal of this task is to convert action-directives to polite requests.", "While there can be more than one way of making a sentence polite, for the above examples, adding gratitude ('Thanks and let's stay in touch') or counterfactuals ('Could you please call me when you get back?') would make them polite (Danescu-Niculescu-Mizil et al., 2013).", "Data Preparation The Enron corpus (Klimt and Yang, 2004) consists of a large set of email conversations exchanged by the employees of the Enron corporation.", "Emails serve as a medium for the exchange of requests, making this corpus an ideal application for politeness transfer.", "We begin by pre-processing the raw Enron corpus following Shetty and Adibi (2004).", "The first set of pre-processing steps and de-duplication yielded a corpus of roughly 2.5 million sentences.", "Further pruning led to a cleaned corpus of over 1.39 million sentences.", "Finally, we use a politeness classifier (Niu and Bansal, 2018) to assign politeness scores to these sentences and filter them into ten buckets based on the score (P0-P9; Fig. 1).", "All the buckets are further divided into train, test, and dev splits (in an 80:10:10 ratio).", "For our experiments, we assumed all the sentences with a politeness score of over 90% by the classifier to be polite, also referred to as the P9 bucket (marked in green in Fig. 1).", "We use the train split of the P9 bucket, over 270K polite sentences, as the training data for the politeness transfer task.", "Since the goal of the task is making action-directives more polite, we manually curate a test set comprising such sentences from test splits across the buckets.", "We first train a classifier on the Switchboard corpus (Jurafsky et al., 1997) to get dialog state tags and filter sentences that have been labeled as either action-directive or quotation.", "Further, we use human annotators to manually select the test sentences.", "The annotators had a Fleiss's kappa score ($\kappa$) of 0.77 and curated a final test set of 800 sentences.",
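For reference, the inter-annotator agreement reported above can be computed with the standard Fleiss's kappa formula. The sketch below is generic (not the authors' code) and assumes a ratings matrix with one row per sentence.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts: (n_items, n_categories); counts[i, j] = number of annotators
    who put item i in category j. Every row must sum to the same number
    of annotators (3 in the annotation study described above)."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                      # annotators per item
    p_j = counts.sum(axis=0) / counts.sum()        # category proportions
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()  # observed vs. chance
    return (P_bar - P_e) / (1 - P_e)

# e.g., 50 sampled sentences, 2 categories (action-directive or not):
# fleiss_kappa(ratings)  # ~0.77 reported in the text
```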
"In Fig. 2, we examine the two extreme buckets with politeness scores of < 10% (P0 bucket) and > 90% (P9 bucket) from our corpus by plotting 10 of the top 30 words occurring in each bucket.", "1 Pre-processing also involved steps for tokenization (done using spaCy (Honnibal and Montani, 2017)) and conversion to lower case.", "2 We prune the corpus by removing the sentences that 1) were fewer than 3 words long, 2) had more than 80% numerical tokens, 3) contained email addresses, or 4) had repeated occurrences of spurious characters.", "3 We used an AWD-LSTM-based classifier for the classification of action-directives.", "4 The score was calculated for 3 annotators on a sample set of 50 sentences.", "We clearly notice that words in the P9 bucket are closely linked to polite style, while words in the P0 bucket are mostly content words.", "This substantiates our claim that the task of politeness transfer is fundamentally different from other attribute transfer tasks like sentiment, where both polarities are clearly defined.", "We use this dataset to perform transfer between these styles.", "This task parallels the task of politeness transfer because, much like politeness transfer, the captions task also involves going from a style-neutral (factual) to a style-rich (humorous or romantic) parlance.", "For sentiment transfer, we use the Yelp restaurant review dataset (Shen et al., 2017) to train, and evaluate on a test set of 1000 sentences released by Li et al. (2018).", "We also use the Amazon dataset of product reviews (He and McAuley, 2016).", "We use the Yelp review dataset labelled for the gender of the author, released by Prabhumoye et al. (2018) and compiled from Reddy and Knight (2016).", "For the political slant task (Prabhumoye et al., 2018), we use the dataset released by Voigt et al. (2018).", "We are given non-parallel samples of sentences $X_1 = \{x^{(1)}_1, \ldots, x^{(1)}_n\}$ and $X_2 = \{x^{(2)}_1, \ldots, x^{(2)}_m\}$ from styles $S_1$ and $S_2$, respectively.", "The objective of the task is to efficiently generate samples $\hat{X}_1 = \{\hat{x}^{(2)}_1, \ldots, \hat{x}^{(2)}_n\}$ in the target style $S_2$, conditioned on samples in $X_1$.", "For a style $S_v$ where $v \in \{1, 2\}$, we begin by learning a set of phrases ($\Gamma_v$) which characterize the style $S_v$.", "The presence of phrases from $\Gamma_v$ in a sentence $x_i$ would associate the sentence with the style $S_v$.", "For example, phrases like 'pretty good' and 'worth every penny' are characteristic of the positive style in the case of the sentiment transfer task.", "We propose a two-staged approach where we first infer a sentence $z(x_i)$ from $x^{(1)}_i$ using a model, the tagger.", "The goal of the tagger is to ensure that the sentence $z(x_i)$ is agnostic to the original style ($S_1$) of the input sentence.", "Conditioned on $z(x_i)$, we then generate the transferred sentence $\hat{x}^{(2)}_i$ in the target style $S_2$ using another model, the generator.", "The intermediate variable $z(x_i)$ is also seen in other style-transfer methods.",
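The two-stage inference just described can be summarized in a few lines. This is a sketch only; tagger and generator stand for the two trained seq2seq models and are hypothetical callables, not the released API.

```python
def transfer(sentence, tagger, generator):
    """Tag-and-generate inference (a sketch)."""
    # Stage 1: infer the style-agnostic sentence z(x) by replacing style
    # markers with [TAG] tokens, or inserting [TAG]s if none are present.
    z = tagger.decode(sentence)      # e.g. "[TAG]0 send me the data"
    # Stage 2: realize the target style by filling every [TAG] slot.
    return generator.decode(z)       # e.g. "could you please send me the data"
```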
"Shen et al. (2017); Prabhumoye et al. (2018); Yang et al. (2018); Hu et al. (2017) transform the input $x^{(v)}_i$ to a latent representation $z(x_i)$ which (ideally) encodes the content present in $x^{(v)}_i$ while being agnostic to style $S_v$.", "In these cases $z(x_i)$ encodes the input sentence in a continuous latent space, whereas for us $z(x_i)$ manifests in the surface form.", "The ability of our pipeline to generate observable intermediate outputs $z(x_i)$ makes it somewhat more interpretable than those other methods.", "We train two independent systems for the tagger & generator, which have complementary objectives.", "The former identifies the style attribute markers $a(x^{(1)}_i)$ from source style $S_1$ and either replaces them with a positional token called [TAG] or merely adds these positional tokens without removing any phrase from the input $x^{(1)}_i$.", "This particular capability of the model enables us to generate these tags in an input that is devoid of any attribute marker (i.e., $a(x^{(1)}_i) = \{\}$).", "This is one of the major differences from prior works, which mainly focus on removing source style attributes and then replacing them with the target style attributes.", "It is especially critical for tasks like politeness transfer where the transfer takes place from a non-polite sentence.", "This is because in such cases we may need to add new phrases to the sentence rather than simply replace existing ones.", "The generator is trained to generate sentences $x^{(2)}_i$ in the target style by replacing these [TAG] tokens with stylistically relevant words inferred from target style $S_2$.", "Even though we have non-parallel corpora, both systems are trained in a supervised fashion as sequence-to-sequence models with their own distinct pairs of inputs & outputs.", "To create parallel training data, we first estimate the style markers $\Gamma_v$ for a given style $S_v$ and then use these to curate style-free sentences with [TAG] tokens. (Figure 3: Our proposed approach, tag and generate.)", "Training data creation details are given in Sections 4.2 and 4.3.", "Fig. 3 shows the overall pipeline of the proposed approach.", "In the first example $x^{(1)}_1$, where there is no clear style attribute present, our model adds the [TAG] token in $z(x_1)$, indicating that a target style marker should be generated in this position.", "On the contrary, in the second example, the terms 'ok' and 'bland' are markers of negative sentiment, and hence the tagger has replaced them with [TAG] tokens in $z(x_2)$.", "We can also see that the inferred sentence in both the cases is free of the original and target styles.", "The structural bias induced by this two-staged approach is helpful in realizing an interpretable, style-free tagged sentence that explicitly encodes the content.", "In the following sections we discuss in detail the methodologies involved in (1) estimating the relevant attribute markers for a given style, (2) the tagger, and (3) the generator modules of our approach.", "Drawing from Li et al. (2018), we propose a simple approach based on n-gram tf-idfs to estimate the set $\Gamma_v$, which represents the style markers for style $v$.", "For a given corpus pair $X_1, X_2$ in styles $S_1, S_2$ respectively, we first compute a probability distribution $p_{21}(w)$ over the n-grams $w$ present in both the corpora (Eq. 2).",
"Intuitively, $p_{21}(w)$ is proportional to the probability of sampling an n-gram present in both $X_1, X_2$ but having a much higher tf-idf value in $X_2$ relative to $X_1$.", "This is how we define the impactful style markers for style $S_2$.", "Here, $\eta_{21}(w)$ is the ratio of the mean tf-idfs for a given n-gram $w$ present in both $X_1, X_2$, with $|X_1| = n$ and $|X_2| = m$.", "Words with higher values for $\eta_{21}(w)$ have a higher mean tf-idf in $X_2$ vs $X_1$, and thus are more characteristic of $S_2$.", "We further smooth and normalize $\eta_{21}(w)$ to get $p_{21}(w)$.", "Finally, we estimate $\Gamma_2$ by $\Gamma_2 = \{w : p_{21}(w) \geq k\}$. In other words, $\Gamma_2$ consists of the set of phrases in $X_2$ above a given style impact $k$.", "$\Gamma_1$ is computed similarly, using $p_{12}(w)$ and $\eta_{12}(w)$.", "The tagger model (with parameters $\theta_t$) takes as input the sentences in $X_1$ and outputs $\{z(x_i) : x^{(1)}_i \in X_1\}$.", "Depending on the style transfer task, the tagger is trained to either (1) identify and replace style attributes $a(x^{(1)}_i)$ with the tag token [TAG] (replace-tagger) or (2) add the [TAG] token at specific locations in $x^{(1)}_i$ (add-tagger).", "In both the cases, the [TAG] tokens indicate positions where the generator can insert phrases from the target style $S_2$.", "Finally, we use the distribution $p_{21}(w)$ / $p_{12}(w)$ over $\Gamma_2$ / $\Gamma_1$ (§4.1) to draw samples of attribute markers that would be replaced with the [TAG] token during the creation of training data.", "The first variant, replace-tagger, is suited for a task like sentiment transfer where almost every sentence has some attribute markers $a(x^{(1)}_i)$ present in it.", "In this case the training data comprises pairs where the input is $X_1$ and the output is $\{z(x_i) : x^{(1)}_i \in X_1\}$.", "The loss objective for replace-tagger is given by $\mathcal{L}_r(\theta_t)$ in Eq. 3: $\mathcal{L}_r(\theta_t) = \sum_{i=1}^{|X_1|} \log P(z(x_i) \mid x^{(1)}_i; \theta_t)$ (3).", "The second variant, add-tagger, is designed for cases where the transfer needs to happen from style-neutral sentences to the target style.", "That is, $X_1$ consists of style-neutral sentences whereas $X_2$ consists of sentences in the target style.", "Examples of such a task include the tasks of politeness transfer (introduced in this paper) and caption style transfer (used by Li et al. (2018)).", "In such cases, since the source sentences have no attribute markers to remove, the tagger learns to add [TAG] tokens at specific locations suitable for emanating style words in the target style.", "The training data (Fig. 4) for the add-tagger is given by pairs where the input is $\{x^{(2)}_i \setminus a(x^{(2)}_i) : x^{(2)}_i \in X_2\}$ and the output is $\{z(x_i) : x^{(2)}_i \in X_2\}$.", "Essentially, for the input we take samples $x^{(2)}_i$ in the target style $S_2$ and explicitly remove the style phrases $a(x^{(2)}_i)$ from them.", "For the output we replace the same phrases $a(x^{(2)}_i)$ with [TAG] tokens.",
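A runnable sketch of the marker-estimation step is given below. It follows the definitions above but is not the paper's code: the smoothing exponent gamma and the reading of the threshold k as a quantile over the normalized distribution are our assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def style_markers(corpus_1, corpus_2, k=0.9, gamma=0.75, ngram=(1, 3)):
    """Estimate Gamma_2, the phrases characteristic of style S_2, from
    the mean tf-idf ratio eta_21(w) over shared n-grams (a sketch)."""
    vec = TfidfVectorizer(ngram_range=ngram)
    tfidf = vec.fit_transform(corpus_1 + corpus_2)
    t1 = np.asarray(tfidf[:len(corpus_1)].mean(axis=0)).ravel()
    t2 = np.asarray(tfidf[len(corpus_1):].mean(axis=0)).ravel()
    shared = (t1 > 0) & (t2 > 0)           # n-grams present in both corpora
    eta = np.zeros_like(t1)
    eta[shared] = t2[shared] / t1[shared]  # eta_21(w)
    p = eta ** gamma                       # smooth ...
    p /= p.sum()                           # ... and normalize to p_21(w)
    vocab = np.asarray(vec.get_feature_names_out())
    return set(vocab[p >= np.quantile(p[p > 0], k)])
```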
"As indicated in Fig. 4, we remove the style phrases 'you would like to' and 'please' and replace them with [TAG] in the output.", "Note that we only use samples from $X_2$ for training the add-tagger; samples from the style-neutral $X_1$ are not involved in the training process at all.", "For example, in the case of politeness transfer, we only use the sentences labeled as polite for training.", "In effect, by training in this fashion, the tagger learns to add [TAG] tokens at appropriate locations in a style-neutral sentence.", "The loss objective ($\mathcal{L}_a$), given by Eq. 4, is crucial for tasks like politeness transfer where one of the styles is poorly defined.", "The training for the generator model is complementary to that of the tagger, in the sense that the generator takes as input the tagged output $z(x_i)$ inferred from the source style and modifies the [TAG] tokens to generate the desired sentence $x^{(v)}_i$ in the target style $S_v$.", "$\mathcal{L}(\theta_g) = \sum_{i=1}^{|X_v|} \log P(x^{(v)}_i \mid z(x_i); \theta_g)$ (5).", "The training data for transfer into style $S_v$ comprises pairs where the input is given by $\{z(x_i) : x^{(v)}_i \in X_v, v \in \{1, 2\}\}$ and the output is $X_v$; i.e., it is trained to transform a style-agnostic representation into a style-targeted sentence.", "Since the generator has no notion of the original style and is only concerned with the style-agnostic representation $z(x_i)$, it is convenient to disentangle the training for the tagger & generator.", "Finally, we note that the location at which the tags are generated has a significant impact on the distribution over style attributes (in $\Gamma_2$) that are used to fill the [TAG] token at a particular position.", "Hence, instead of using a single [TAG] token, we use a set of positional tokens $[TAG]_t$ where $t \in \{0, 1, \ldots, T\}$ for a sentence of length $T$.", "By training both tagger and generator with these positional $[TAG]_t$ tokens, we enable them to easily realize different distributions of style attributes for different positions in a sentence.", "For example, in the case of politeness transfer, the tags added at the beginning ($t = 0$) will almost always be used to generate a token like 'Would it be possible ...', whereas for a higher $t$, $[TAG]_t$ may be replaced with a token like 'thanks' or 'sorry'.",
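Training-pair creation for the add-tagger, per Fig. 4, can be sketched as follows; the phrase matching is simplified here to unigram lookup, and all names are illustrative.

```python
def make_add_tagger_pair(sentence, markers):
    """Build one (input, output) training pair for the add-tagger from a
    target-style sentence and the style phrases Gamma_2 (a sketch).
    Input: the sentence with style phrases removed.
    Output: the sentence with those phrases replaced by positional tags."""
    keep, tagged = [], []
    for t, tok in enumerate(sentence.split()):
        if tok in markers:              # simplification: unigram match only
            tagged.append(f"[TAG]{t}")  # positional tag marks the slot
        else:
            keep.append(tok)
            tagged.append(tok)
    return " ".join(keep), " ".join(tagged)

# make_add_tagger_pair("please send me the data", {"please"})
#   -> ("send me the data", "[TAG]0 send me the data")
```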
"5 Experiments and Results Baselines We compare our systems against three previous methods.", "DRG (Li et al., 2018), Style Transfer Through Back-translation (BST) (Prabhumoye et al., 2018), and style transfer from non-parallel text by cross-alignment (CAE) (Shen et al., 2017).", "For DRG, we only compare against the best reported method, delete-retrieve-generate.", "For all the models, we follow the experimental setups described in their respective papers.", "Implementation Details We use 4-layered transformers (Vaswani et al., 2017) to train both the tagger and generator modules.", "Each transformer has 4 attention heads with a 512-dimensional embedding layer and hidden state size.", "Dropout (Srivastava et al., 2014) with probability 0.3 is added for each layer in the transformer.", "For the politeness dataset the generator module is trained with data augmentation techniques like random word shuffle and word drops/replacements, as proposed by Im et al. (2017).", "Table 1: Results on the Politeness, Gender and Political datasets. Columns per dataset: Acc, BL-s, MET, ROU. Politeness: CAE 99.62/6.94/10.73/25.71, BST 60.75/2.55/9.19/18.99, DRG 90.25/11.83/18.07/41.09, OURS 89.50/70.44/36.26/70.99. Gender: CAE 65.21/9.25/14.72/42.42, BST 54.4/20.73/22.57/55.55, DRG 36.29/22.9/22.84/53.30, OURS 82.21/52.76/37.42/74.59. Political: CAE 77.71/3.17/7.79/27.17, BST 88.49/10.71/16.26/41.02, DRG 69.79/25.69/21.6/51.8, OURS 87.74/68.44/45.44/77.51.", "We empirically observed that these techniques provide an improvement in the fluency and diversity of the generations.", "Both modules were also trained with BPE tokenization (Sennrich et al., 2015), using a vocabulary of size 16000 for all the datasets except for Captions, which was trained using 4000 BPE tokens.", "The value of the smoothing parameter in Eq. 2 is set to 0.75.", "For all datasets except Yelp, we use phrases with $p_{21}(w) \geq k = 0.9$ to construct $\Gamma_2, \Gamma_1$ (§4.1).", "For Yelp, $k$ is set to 0.97.", "During inference we use beam search (beam size = 5) to decode tagged sentences and targeted generations for the tagger & generator, respectively.", "For the tagger, we re-rank the final beam search outputs based on the number of [TAG] tokens in the output sequence (favoring more [TAG] tokens).", "Automated Evaluation Following prior work (Li et al., 2018; Shen et al., 2017), we use automatic metrics for evaluation of the models along two major dimensions: (1) style transfer accuracy and (2) content preservation.", "To capture accuracy, we use a classifier trained on the non-parallel style corpora for the respective datasets (barring politeness).", "The architecture of the classifier is based on AWD-LSTM (Merity et al., 2017) and a softmax layer trained via cross-entropy loss.", "We use the implementation provided by fastai.", "For politeness, we use the classifier trained by Niu and Bansal (2018).", "The metric of transfer accuracy (Acc) is defined as the percentage of generated sentences classified to be in the target domain by the classifier.", "The standard metric for measuring content preservation is BLEU-self (BL-s) (Papineni et al., 2002), which is computed with respect to the original sentences.", "Additionally, we report the BLEU-reference (BL-r) scores using the human reference sentences on the Yelp, Amazon and Captions datasets (Li et al., 2018).", "We also report ROUGE (ROU) (Lin, 2004) and METEOR (MET) (Denkowski and Lavie, 2011) scores.", "5 https://docs.fast.ai/", "6 This is trained on the dataset given by Danescu-Niculescu-Mizil et al. (2013).",
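The [TAG]-favoring re-ranking of the tagger's beam outputs could look like the sketch below; this is our illustration, with candidate strings and scores as placeholders.

```python
def rerank_beam(candidates, scores, tag="[TAG]"):
    """Re-rank beam-search outputs of the tagger: prefer sequences with
    more [TAG] tokens, breaking ties by model score (a sketch)."""
    ranked = sorted(zip(candidates, scores),
                    key=lambda cs: (cs[0].count(tag), cs[1]),
                    reverse=True)
    return [c for c, _ in ranked]
```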
"In particular, METEOR also uses synonyms and stemmed forms of the words in candidate and reference sentences, and thus may be better at quantifying semantic similarities.", "Table 1 shows that our model achieves significantly higher scores on BLEU, ROUGE and METEOR as compared to the baselines DRG, CAE and BST on the Politeness, Gender and Political datasets.", "The BLEU score on the Politeness task is greater by 58.61 points with respect to DRG.", "In general, CAE and BST achieve high classifier accuracies, but they fail to retain the original content.", "The classifier accuracy on the generations of our model is comparable (within 1%) with that of DRG for the Politeness dataset.", "In Table 2, we compare our model against CAE and DRG on the Yelp, Amazon, and Captions datasets.", "Table 2: Results on the Yelp, Amazon and Captions datasets. Columns per dataset: Acc, BL-s, BL-r, MET, ROU. Yelp: CAE 72.1/19.95/7.75/21.70/55.9, DRG 88.8/36.69/14.51/32.09/61.06, OURS 86.6/47.14/19.76/36.26/70.99. Amazon: CAE 78/2.64/1.68/9.52/29.16, DRG 52.2/57.07/29.85/50.16/79.31, OURS 66.4/68.74/34.80/45.3/83.45. Captions: CAE 89.66/2.09/1.57/9.61/30.02, DRG 95.65/31.79/11.78/32.45/64.32, OURS 93.17/51.01/15.63/43.67/79.51.", "For each of the datasets our test set comprises 500 samples (with human references) curated by Li et al. (2018).", "We observe an increase in the BLEU-reference scores by 5.25, 4.95 and 3.64 on the Yelp, Amazon, and Captions test sets respectively.", "Additionally, we improve the transfer accuracy for Amazon by 14.2% while achieving accuracies similar to DRG on Yelp and Captions.", "As noted by Li et al. (2018), one of the unique aspects of the Amazon dataset is the absence of similar content in both the sentiment polarities.", "Hence, the performance of their model is worse in this case.", "Since we do not make any such assumptions, we perform significantly better on this dataset.", "While popular, the metrics of transfer accuracy and BLEU have significant shortcomings, making them susceptible to simple adversaries.", "BLEU relies heavily on n-gram overlap, and classifiers can be fooled by certain polarizing keywords.", "We test this hypothesis on the sentiment transfer task with a Naive Baseline.", "This baseline adds 'but overall it sucked' at the end of the sentence to transfer it to negative sentiment.", "Similarly, it appends 'but overall it was perfect' for transfer into a positive sentiment.", "This baseline achieves an average accuracy score of 91.3% and a BLEU score of 61.44 on the Yelp dataset.", "Despite high evaluation scores, it does not reflect a high rate of success on the task.", "In summary, evaluation via automatic metrics might not truly correlate with task success.", "Changing Content Words Given that our model is explicitly trained to generate new content only in place of the [TAG] token, it is expected that a well-trained system will retain most of the non-tagged (content) words.", "Clearly, replacing content words is not desired since it may drastically change the meaning.", "In order to quantify this, we calculate the fraction of non-tagged words being changed across the datasets.", "We found that the non-tagged words were changed for only 6.9% of the sentences.", "In some of these cases, we noticed that changing non-tagged words helped in producing outputs that were more natural and fluent.",
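The adversarial Naive Baseline above is trivial to reproduce, which is exactly the point; a sketch:

```python
def naive_negative(sentence):
    """Append a polarizing phrase: flips most sentiment classifiers while
    keeping nearly all source n-grams, so BLEU-self stays high."""
    return sentence.rstrip(". ") + " but overall it sucked ."

def naive_positive(sentence):
    return sentence.rstrip(". ") + " but overall it was perfect ."
```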
"Human Evaluation Following Li et al. (2018), we select 10 unbiased human judges to rate the output of our model and DRG on three aspects: (1) content preservation (Con), (2) grammaticality of the generated content (Gra), (3) target attribute match of the generations (Att).", "For each of these metrics, the reviewers give a score between 1-5 to each of the outputs, where 1 reflects a poor performance on the task and 5 means a perfect output.", "Since the judgement of signals that indicate gender and political inclination is prone to personal biases, we do not annotate these tasks for the target attribute match metric.", "Instead we rely on the classifier scores for the transfer.", "We have used the same instructions from Li et al. (2018) for our human study.", "Overall, we evaluate both systems on a total of 200 samples for Politeness and 100 samples each for Yelp, Gender and Political.", "Table 3 shows the results of human evaluations.", "We observe a significant improvement in content preservation scores across various datasets (specifically in the Politeness domain), highlighting the ability of our model to retain content better than DRG.", "Alongside, we also observe consistent improvements of our model on target attribute matching and grammatical correctness.", "Qualitative Analysis We compare the results of our model with the DRG model qualitatively, as shown in Table 4.", "Our analysis is based on the linguistic strategies for politeness as described in Danescu-Niculescu-Mizil et al. (2013).", "The first sentence presents a simple example of the counterfactual modal strategy, inducing 'Could you please' to make the sentence polite.", "The second sentence highlights another subtle concept of politeness, 1st Person Plural, where adding 'we' helps being indirect and creates the sense that the burden of the request is shared between speaker and addressee.", "The third sentence highlights the ability of the model to add apologizing words like 'Sorry', which helps in deflecting the social threat of the request by attuning to the imposition.", "According to the Please Start strategy, it is more direct and insincere to start a sentence with 'Please'.", "The fourth sentence projects the case where our model uses 'thanks' at the end to express gratitude and, in turn, makes the sentence more polite.", "Our model follows the strategies prescribed in Danescu-Niculescu-Mizil et al. (2013) while generating polite sentences.", "Ablations We provide a comparison of the two variants of the tagger, namely the replace-tagger and the add-tagger, on two datasets.", "We also train and compare them with a combined variant.", "We train these tagger variants on the Yelp and Captions datasets and present the results in Table 5.",
"We observe that for Captions, where we transfer a factual (neutral) sentence to a romantic/humorous one, the add-tagger provides the best accuracy with a relatively negligible drop in BLEU scores.", "7 We provide additional qualitative examples for other tasks in the supplementary material.", "8 Training of the combined variant is done by training the tagger model on the concatenation of the training data for the add-tagger and the replace-tagger.", "On the contrary, for Yelp, where both polarities are clearly defined, the replace-tagger gives the best performance.", "Interestingly, the accuracy of the add-tagger is 50% in the case of Yelp, since adding negative words to a positive sentence or vice versa neutralizes the classifier scores.", "Thus, we can use the add-tagger variant for transfer from a polarized class to a neutral class as well.", "To check if the combined tagger is learning to perform the operation that is more suitable for a dataset, we calculate the fraction of times the combined tagger performs add/replace operations on the Yelp and Captions datasets.", "We find that for Yelp (a polar dataset) the combined tagger performs 20% more replace operations (as compared to add operations).", "In contrast, on the Captions dataset, it performs 50% more add operations.", "While the combined tagger learns to use the optimal tagging operation to some extent, a deeper understanding of this phenomenon is an interesting topic for future research.", "We conclude that the choice of the tagger variant is dependent on the characteristics of the underlying transfer task.", "We introduce the task of politeness transfer, for which we provide a dataset comprised of sentences curated from email exchanges present in the Enron corpus.", "We extend prior works (Li et al., 2018; Sudhakar et al., 2019) on attribute transfer by introducing a simple pipeline, tag & generate, which is an interpretable two-staged approach for content-preserving style transfer.", "We believe our approach is the first to be robust in cases when the source is style-neutral, like the non-polite class in the case of politeness transfer.", "Automatic and human evaluation shows that our approach outperforms other state-of-the-art models on content preservation metrics while retaining (or in some cases improving) the transfer accuracies.", "This material is based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200.", "The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon.", "The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government.", "This work was also supported in part by ONR Grant N000141812861, NSF IIS1763562, and Apple.", "We would also like to acknowledge NVIDIA's GPU support.", "We would like to thank Antonis Anastasopoulos, Ritam Dutt, Sopan Khosla, and Xinyi Wang for the helpful discussions." ]
[ "objective", "objective", "method", "result", "result", "other", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "method", "method", "result", "result", "result", "objective", "other", "abstain", "objective", "result", "method", "other", "other", "other", "method", "other", "method", "other", "other", "method", "other", "other", "other", "objective", "abstain", "other", "other", "objective", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "abstain", "abstain", "method", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "objective", "result", "other", "other", "other", "other", "other", "other" ]
[ "Understanding voluminous historical records provides clues on the past in various aspects, such as social and political issues and even natural science facts.", "However, it is generally difficult to fully utilize the historical records, since most of the documents are not written in a modern language and part of the contents are damaged over time.", "As a result, restoring the damaged or unrecognizable parts as well as translating the records into modern languages are crucial tasks.", "In response, we present a multi-task learning approach to restore and translate historical documents based on a self-attention mechanism, specifically utilizing two Korean historical records, ones of the most voluminous historical records in the world.", "Experimental results show that our approach significantly improves the accuracy of the translation task than baselines without multi-task learning.", "In addition, we present an in-depth exploratory analysis on our translated results via topic modeling, uncovering several significant historical events.", "Historical records are invaluable sources of information on the lifestyle and scientific records of our ancestors.", "Humankind has learned how to handle social and political problems by learning from the past.", "The historical records also serve as the evidence of intellectual accomplishment of humanity over time.", "Given such importance, there has been a great deal of nationwide efforts to preserve these historical records.", "For instance, UNESCO protects world heritage sites, and experts from all around the world have been converting and restoring historical records in a digital form for long-term preservation.", "A representative example is the Google Books Library Project 1 .", "However, despite the importance of the historical records, it has been challenging to properly utilize the records for the following reasons.", "First, the nontrivial amounts of the documents are partially damaged and unrecognizable due to unfortunate historical events or environments, such as wars and disasters, as well as the weak durability of paper documents.", "These factors result in difficul-ties to translate and understand the records.", "Second, as most of the records are written in ancient and outdated languages, non-experts are difficult to read and understand them.", "Thus, for their in-depth analysis, it is crucial to recover the damaged parts and properly translate them into modern languages.", "To address these issues existing in historical records, we formulate them as the task of language modeling, especially for the recovery and neural machine translation, by leveraging the advanced neural networks.", "Moreover, we apply topic modeling to the translated historical records to efficiently discover the important historical events over the last hundreds of years.", "In particular, we utilize two representative Korean historical records: the Annals of the Joseon Dynasty and the Diaries of the Royal Secretariat (hereafter we refer to them as AJD and DRS respectively).", "These records, which contain 50 million and 243 million characters respectively, are recognized as the largest historical records in the world.", "Considering their high value, UNESCO recognized them as the Memory of the 1 https://support.google.com/websearch/answer/9690276 Large-scale ancient documents , , ,", "World. 2,3 These two historical corpora contain the contents of five hundred years from the fourteenth century to the early twentieth century. 
In detail, AJD consists of administrative affairs with national events, and DRS contains events that occurred around the kings of the Joseon Dynasty. These corpora are valuable as they contain diverse information, including international relations and natural disasters. In addition, the contents of the records are objective, since the writing rules were so strict that political intervention, even from the kings, was not allowed by their independent institution.", "Although DRS contains a much larger amount of information than AJD, only 10-20% of DRS has been translated into the modern Korean language by a few dozen experts over the last twenty years. The complete translation of DRS is currently expected to take an additional 30-40 years if only human experts continue to translate them. Applying neural machine translation models to the historical records raises several issues. First, the pre-trained models for Chinese are not suitable for DRS and AJD, mainly because of the differences between Hanja and the Chinese language. In the past, Korean historiographers borrowed the Chinese characters to write the sentences spoken by Koreans. As a result, diverse characters were modified or created, and considerable grammatical differences exist between the Chinese language and Hanja. Furthermore, several parts of those records are damaged and require restoration, as shown in Fig. 2. Therefore, these damaged parts should be restored in order to translate them correctly. In order to address these issues, we propose", "2 http://www.unesco.org/new/en/communication-and-information/memory-of-the-world/register/full-list-of-registered-heritage/registered-heritage-page-8/the-annals-of-the-choson-dynasty/ 3 http://www.unesco.org/new/en/communication-and-information/memory-of-the-world/register/full-list-of-registered-heritage/registered-heritage-page-8/seungjeongwon-ilgi-the-diaries-of-the-royal-secretariat/", "a model suitable for the historical documents using the self-attention mechanism.", "Overall, we propose a novel multi-task approach to restore the damaged parts and translate the records into a modern language. Afterward, we extract the meaningful historical topics from the world's largest historical records, as shown in Fig. 1.
", "This study makes the following contributions: We design a model based on the self-attention mechanism with multi-task learning to restore and translate the historical records.", "Results demonstrate that our methods are effective in restoring the damaged characters and translating the records into a modern language.", "We translate all the untranslated sentences in DRS.", "We believe that this dataset would be invaluable for researchers in various fields.", "4 The codes, trained model, and datasets are accessible via https://github.com/Kyeongpil/deep-joseon-record-analysis.", "We present a case study that extracts meaningful historical events by applying topic modeling, highlighting the importance of analysis of historical documents.", "This work broadly incorporates three different tasks: document restoration, machine translation, and document analysis.", "Therefore, this section describes studies related to the restoration of damaged documents, neural machine translation, and the analysis of historical records.", "Recently, neural machine translation (NMT) has achieved outstanding results.", "Based on the encoder-decoder architecture, the attention mechanism (Bahdanau et al., 2015) significantly improves the performance of NMT by calculating the target context vector in the current time step via dynamically combining the encoding vectors of source words.", "The self-attention-based networks (Vaswani et al., 2017) consider the correlations among all word pairs in the source and target sentences.", "Based on the success of self-attention networks, Transformer architectures for language modeling have been proposed, showing state-of-the-art performance (Devlin et al., 2019; Radford et al., 2019).", "In particular, pre-training approaches further improve performance, since they train the model robustly on several tasks using a large document corpus.", "In addition, lightweight models, such as ALBERT (Lan et al., 2019), have been proposed to reduce the model size while preserving the model performance.", "However, as most of the recent approaches focus on pre-training with documents written in a modern language, no model exists for historical datasets.", "Therefore, we adopt a lightweight model in the same manner as ALBERT to efficiently reconstruct and translate millions of documents.", "Regarding the translation task for the historical documents, several studies attempt to translate the ancient Chinese documents into the modern Chinese language (Zhang et al., 2019b; Liu et al., 2019a).", "However, as they mainly attempt to translate archaic characters into the modern language using paired corpora, they do not fully utilize the unpaired corpus.", "Therefore, we improve the performance of machine translation for historical corpora with multi-task learning over the translation and restoration tasks, which fully utilizes the paired and unpaired corpora.", "Unfortunately, many characters in the historical records are damaged or misspelled.",
"As shown in Fig. 2, the damaged parts are prevalent in DRS, which significantly degrade the quality of subsequent translation tasks.", "To address this problem, several studies focus on normalizing the misspelled words (Tang et al., 2018; Domingo and Nolla, 2018), and others further apply language modeling to restore the parts of the documents via deep neural networks (DNNs) (Caner and Haritaoglu, 2010; Assael et al., 2019).", "Recently, the Cloze-style approach of machine reading comprehension (masked language modeling; MLM) predicts the original tokens for those positions where the words in the original sentence are randomly chosen and masked or replaced (Hermann et al., 2015).", "Several studies significantly improved the model performance by pre-training the model via the Cloze-style approach.", "By utilizing the MLM approach with the self-attention mechanism and large-scale training datasets, numerous models improve the performance of various downstream tasks, including the NMT task (Baevski et al., 2019; Devlin et al., 2019; Zhang et al., 2019a; Conneau and Lample, 2019; Liu et al., 2019c; Clark et al., 2019).", "However, to our knowledge, few studies apply such an MLM approach to restore the damaged parts.", "Motivated by these studies, we design our model using masked language modeling based on the self-attention architecture to recover the damaged documents considering their contexts.", "Various studies apply machine learning approaches to analyze the historical records (Zhao et al., 2014; Kumar et al., 2014; Mimno, 2012; Kim et al., 2015; Bak and Oh, 2015, 2018).", "In addition, researchers adopt neural networks, such as convolutional neural networks and autoencoders, for page segmentation and optical character recognition to convert the historical records into a digital form (Chen et al., 2017; Clanuwat et al., 2019).", "Given such digital-form records, analysts attempt to utilize topic modeling to discover the historically meaningful events (Yang et al., 2011).", "Especially, using the translated AJD, researchers discover historical events such as magnetic storm activities (Yoo et al., 2015; Hayakawa et al., 2017), meteors (Lee et al., 2009), and solar activities (Jeon et al., 2018).", "In political science, researchers analyze the decision patterns of a royal family in the Joseon Dynasty (Bak and Oh, 2015, 2018).", "Besides, the dietary patterns and dynamic social relations among key figures during the Joseon Dynasty have been investigated (Ki et al., 2018).", "(Figure 3: Overview of the proposed model for the restoration and translation tasks.)", "However, existing studies mainly rely on the documents translated by human experts.", "Therefore, we first translate the documents in AJD and DRS.", "Afterward, we apply topic modeling approaches to mine meaningful historical events over large-scale data.", "This section describes a multi-task learning approach based on the Transformer networks to effectively restore and translate the historical records.", "The overview of our model is shown in Fig. 3.", "The AJD and DRS datasets consist of Hanja sentences $H = \{h_1, \ldots, h_N\}$ and Korean sentences $K = \{k_1, \ldots, k_N\}$, where each Korean sentence is translated from its corresponding Hanja sentence.
", "Here, the Hanja represents the Chinese characters borrowed to write the Korean language in the past.", "Especially, DRS contains additional Hanja sentences $\widetilde{H} = \{h_{N+1}, \ldots, h_M\}$ that are not translated yet.", "Hence, we have in total $M$ Hanja sentences in the Hanja corpus, such that $\mathcal{H} = H \cup \widetilde{H}$, and $N$ Korean sentences in the Korean corpus $K$.", "Considering the properties of AJD and DRS, we design a multi-task learning approach with document restoration and machine translation, based on the Transformer networks.", "As shown in Fig. 3, our model consists of embedding and output layers for Hanja and Korean, and three Transformer modules: the shared encoder, the restoration encoder, and the translation decoder.", "The restoration encoder is an encoder for the restoration task.", "The translation decoder is used for translating Hanja sentences into modern Korean sentences, and the shared encoder is used for both the restoration and translation tasks.", "By sharing the encoder module for both tasks, the shared encoder is trained with a large-scale corpus, i.e., the Hanja-Korean paired dataset and the additional unpaired Hanja dataset.", "The parameter-sharing technique assists the model to learn rich information from the Hanja corpus.", "We apply the cross-layer parameter-sharing technique in the same manner as used in ALBERT (Lan et al., 2019), which shares the attention parameters for each Transformer encoder and decoder module to reduce the model size and the inference time.", "The restoration task for damaged documents is similar to the MLM approach, which masks randomly chosen tokens in the input sentence and then predicts their original tokens in the corresponding positions.", "We apply the MLM technique to restore the damaged documents, especially in the case of the Hanja sentences $\mathcal{H}$.", "For word indices $(w^{h_i}_1, \ldots, w^{h_i}_{L_i})$ in the Hanja sentence $h_i$, where $L_i$ is the length of the $i$-th sequence, several words are randomly selected and replaced by a [MASK] token.", "We extract word embedding vectors $(e^{h_i}_1, \ldots, e^{h_i}_{L_i}) \in \mathbb{R}^{d_{emb}}$ from the Hanja embedding layer combined with positional embedding vectors, where $d_{emb}$ represents the dimension size of the embedding space.", "Here, we apply the factorized embedding parameterization technique to reduce model parameters (Lan et al., 2019).", "These embedding vectors are projected onto the $d_{model}$-dimensional embedding space through a linear layer.", "Subsequently, the embedding vectors are transformed into the Hanja context vectors $(s^{h_i}_1, \ldots, s^{h_i}_{L_i})$ via the shared encoder and the restoration encoder as $s^{h_i}_1, \ldots, s^{h_i}_{L_i} = f_S(e^{h_i}_1, \ldots, e^{h_i}_{L_i})$ (1) and $\bar{s}^{h_i}_1, \ldots, \bar{s}^{h_i}_{L_i} = f_R(s^{h_i}_1, \ldots, s^{h_i}_{L_i})$ (2), where the $f_S$ and $f_R$ functions represent the shared encoder and the restoration encoder, respectively.", "The Hanja context vector is non-linearly transformed into the output vector $z^{h_i}_k \in \mathbb{R}^{d_{emb}}$ via the output layer.",
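A sketch of this masked-restoration setup is below. The masking rates, and the model call returning per-position probabilities over the Hanja vocabulary, are our assumptions for illustration.

```python
import random

def mask_for_restoration(tokens, mask_rate=0.15, max_n=3):
    """Randomly replace uni-/bi-/tri-grams with [MASK], mimicking the
    n-gram MLM objective used for restoration (a sketch)."""
    tokens, i = list(tokens), 0
    while i < len(tokens):
        if random.random() < mask_rate:
            n = random.randint(1, max_n)           # n-gram masking
            for j in range(i, min(i + n, len(tokens))):
                tokens[j] = "[MASK]"
            i += n
        else:
            i += 1
    return tokens

def restore_top_k(model, tokens, k=10):
    """For each damaged ([MASK]) position, return the k most probable
    characters so a human expert can confirm the right one.
    `model` is a hypothetical callable returning an array of shape
    (seq_len, |V_h|) of token probabilities (Eq. 3)."""
    probs = model(tokens)
    return {i: probs[i].argsort()[::-1][:k]
            for i, tok in enumerate(tokens) if tok == "[MASK]"}
```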
"We calculate the probability $P(w^{h_i}_{k,m} \mid w^{h_i}_1, \ldots, w^{h_i}_{L_i})$ for the index $m$ of the original token $w^{h_i}_k$, using the softmax function as $P(w^{h_i}_{k,m} \mid w^{h_i}_1, \ldots, w^{h_i}_{L_i}) = \frac{\exp({W^h_m}^\top z^{h_i}_k)}{\sum_j^{|V^h|} \exp({W^h_j}^\top z^{h_i}_k)}$ (3), where $|V^h|$ is the size of the Hanja vocabulary.", "In order to facilitate the training of our translation module, we exploit the Hanja-Korean paired dataset $\{(h_i, k_i) \mid h_i \in H, k_i \in K\}$.", "As shown in Fig. 3, we first extract the Hanja context vectors $(s^{h_i}_1, \ldots, s^{h_i}_{L_i})$ from the word tokens in the Hanja sentence $h_i$, using the shared encoder in the same manner as in Eq. 1.", "Utilizing the Hanja context vectors and previously predicted Korean words $(w^{k_i}_1, \ldots, w^{k_i}_{t-1})$, we subsequently calculate the $d_{model}$-dimensional Korean context vector $s^{k_i}_t$ for the current time step $t$ as $s^{k_i}_t = f_D(s^{h_i}_1, \ldots, s^{h_i}_{L_i}, w^{k_i}_1, \ldots, w^{k_i}_{t-1})$ (4), where $f_D$ represents the translation decoder layers.", "After calculating the Korean context vector $s^{k_i}_t$, we non-linearly transform the context vector to the output vector $z^{k_i}_t \in \mathbb{R}^{d_{emb}}$ through the output layer, along with the above-mentioned factorized embedding parameterization for parameter reduction.", "Finally, we yield the probability that the word $V_m$ is generated at the $t$-th step as $P(w^{k_i}_{t,m} \mid h_i, w^{k_i}_{1:t-1}) = \frac{\exp({W^k_m}^\top z^{k_i}_t)}{\sum_j^{|V^k|} \exp({W^k_j}^\top z^{k_i}_t)}$ (5), where $|V^k|$ is the size of the vocabulary for the Korean corpus, and $W^k \in \mathbb{R}^{|V^k| \times d_{emb}}$ is the output layer for the Korean corpus.", "As previously mentioned, we employ the parameter-sharing approach for the encoder module (i.e., the shared encoder), thus enhancing the robustness of our model, especially with the Hanja dataset.", "In order to train our model, we use the cross-entropy loss to maximize the probability of the original token indices for the masked tokens and the target sentence for the translation task as", "$\mathcal{L}_{rst} = -\frac{1}{M} \sum_{h_i \in \mathcal{H}} \mathbb{E}_{k \in \delta(h_i)} [\log P(w^{h_i}_k \mid h_i)]$ (6) and $\mathcal{L}_{trs} = -\frac{1}{N} \sum_{i=1}^{N} \frac{1}{|k_i|} \sum_{t=1}^{|k_i|} \log P(w^{k_i}_t \mid h_i, w^{k_i}_{1:t-1})$ (7),", "where $\delta(\cdot)$ is an operator that randomly selects the tokens from each sentence for MLM.", "In this study, we apply not only unigram masking but also n-gram masking techniques (i.e., bigrams and trigrams), as previously applied (Zhang et al., 2019a).", "Finally, the total loss is defined as $\mathcal{L} = \mathcal{L}_{rst} + \mathcal{L}_{trs}$.", "Our model is optimized by using the rectified Adam (Liu et al., 2019b) with the layer-wise adaptive rate scheduling technique (You et al., 2017).", "We also apply the gradient accumulation technique and update our model for each loss asynchronously, to increase the batch size and efficiently manage the GPU memory.", "After training the model, the damaged tokens are replaced by the [MASK] token during the restoration stage, and the model obtains the top-K characters with the highest probabilities, among which users can choose and confirm the correct character for each damaged position.", "In addition, we translate all the Hanja records that are not yet translated for further in-depth analysis.", "When translating the Hanja sentences, we additionally apply beam search with length normalization.", "The translation task for all the untranslated records took approximately five days using 20 V100 GPUs.", "This section first describes our datasets and experimental settings.", "To train our model, we collect most of the documents of AJD and DRS, including those manually translated to date, provided by the National Institute of Korean History.",
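Putting the two objectives together, one asynchronous multi-task update could be sketched as follows. This is PyTorch-flavored pseudocode with hypothetical module and batch attribute names, not the released implementation.

```python
import torch.nn.functional as F

def training_step(model, batch_r, batch_t, opt_r, opt_t):
    """One multi-task step (a sketch): each loss is computed on its own
    batch and applied asynchronously, as described in the text."""
    # Restoration loss (Eq. 6): shared encoder -> restoration encoder.
    logits_r = model.restore(batch_r.masked_hanja)        # hypothetical API
    loss_r = F.cross_entropy(logits_r[batch_r.mask_positions],
                             batch_r.original_tokens)
    loss_r.backward(); opt_r.step(); opt_r.zero_grad()

    # Translation loss (Eq. 7): shared encoder -> translation decoder.
    logits_t = model.translate(batch_t.hanja, batch_t.korean_inputs)
    loss_t = F.cross_entropy(logits_t.transpose(1, 2),    # (B, |V_k|, T)
                             batch_t.korean_targets)
    loss_t.backward(); opt_t.step(); opt_t.zero_grad()

    return loss_r.item() + loss_t.item()                  # total loss L
```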
History 5 .", "The records contain approximately 250K documents for AJD and 1.4M documents for DRS.", "After collecting documents, we tokenize each Hanja sentence into the character-level tokens, similar to previous studies (Zhang et al., 2014; Li et al., 2018), and also tokenize each Korean sentence based on the unigram language model (Kudo, 2018) provided by Google's SentencePiece library.", "6 Here, we included those words appearing more than ten times in the Hanja vocabulary, the size of which 5 http://www.history.go.kr/ 6 https://github.com/google/sentencepiece is about 8.7K words.", "For the Korean corpus, we limit the size of the Korean vocabulary to 24K.", "The out-of-vocabulary words are replaced with UNK (unknown) tokens.", "To improve the stability and effi-ciency during the training stage, we filter out those Hanja sentences with less than four tokens or more than 350 tokens and those Korean sentences with less than four tokens or more than 300 tokens.", "Note that the portion of sentences filtered out from each dataset is less than 10%.", "To evaluate the performance of our model, we randomly select 20K sentences as a test dataset for each of the paired and the unpaired sets.", "The sizes of the training set for the Hanja-Korean paired corpus and the unpaired Hanja corpus are 240K and 1.38M, respectively.", "The statistics of the dataset are summarized in Table 1.", "We set hyper-parameters similarly to the BERT (De-vlin et al., 2019) base model.", "We set the size of the embedding dimension d emb , the hidden vector dimension d model , and the dimension of the position-wise feed-forward layers as 256, 768, and 3,072, respectively.", "The shared encoder, the translation decoder, and the restoration encoder consist of 12, 12, and 6 layers, respectively.", "We use 12 attention heads for each multi-head attention layer.", "Overall, the total number of parameters is around 168.8M.", "After obtaining machine-translated outputs of the remaining records, we apply topic modeling to the full set of documents for exploratory analysis of historical events.", "To be specific, the full set of documents include all of the manually translated records as well as machine-translated records by our model.", "By using each translated record k i and its written date information d i , we first parse the document into morphemes and then use the only noun and adjective tokens.", "Afterward, we build the term-date matrix V RV D where V is the vocabulary size and D is the number of dates in the total set of historical documents.", "In this study, we utilize non-negative matrix factorization (NMF) (Lee and Seung, 2001) as a topic modeling method 7 .", "We first assume that there ex-7 Topic modeling includes several methods such as latent Dirichlet allocation (LDA) (Blei et al., 2003)-based and nonnegative matrix factorization-based models (Lee and Seung, 2001).", "We additionally tested topic modeling with LDA, but ist K topics in the corpus.", "The term-date matrix V is decomposed into the term-topic weight matrix W RV K and the date-topic weight matrix H RD K as W , H = arg min W , H 0 (cid:107) V WH (cid:62) (cid:107) 2 F + ( W , H ) , (9) where (cid:107)(cid:107) F represents the Frobenius norm, and and represent the L 1 regularization function and the regularization weight, respectively.", "We set the number of topics K as 20 8 and the regularization weight as 0.1.", "This section describes the results of the performances of our model for restoration and translation, followed by qualitative 
examples of each task as well as topic modeling results.", "We evaluate the performance of our model on the document restoration task on the test dataset.", "We also compare the performance of models trained with and without multi-task learning.", "Table 3 shows the top-K results (HITS@K).", "The top-10 accuracy of our proposed model is almost 89%, which indicates high performance and demonstrates that our model provides analysts with appropriate options.", "However, the baseline model, trained without multi-task learning, performs slightly better than the one with multi-task learning.", "This shows that the baseline model is more specialized in the document restoration task.", "Although our model's restoration performance is slightly lower than the baseline's, the benefits of the multi-task learning approach are clearly manifested in the NMT task, as shown in Table 5.", "As our model shows acceptable performance on both the restoration and the translation tasks, we conclude that our model learns both tasks well via multi-task learning.", "We will further discuss the main benefits of multi-task learning in Section 5.2.", "We further investigate the qualitative results of the document restoration task.", "Table 2 shows four randomly sampled example pairs.", "As shown in the first three rows of this table, the model also has the ability to predict bi-gram and tri-gram character-level tokens, because the model is trained using n-gram-based MLM.", "Furthermore, although each character is not exactly the same as the original one, the last example in the table shows that our model restores the proper format of the name part.", "However, predicting the exact name is a difficult task even for human experts considering the context of the sentence, as prior knowledge is necessary to predict the exact name.", "Therefore, we quantitatively measured the model performance on proper nouns, e.g.
person and location names, using 200 samples.", "The average top-10 accuracy is only 8.3%, significantly lower than the overall accuracy, which is above 89%.", "We conjecture that the degradation is mainly due to the difficulty of maintaining the information of proper nouns, which would require external knowledge.", "We leave this for future work.", "To investigate the performance of the machine translation task, we translate the Hanja sentences in the test dataset and then evaluate the model performance.", "As shown in Table 5, the results for the translation task are evaluated by BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), and ROUGE-L (Lin, 2004).", "In these results, Full represents our proposed model trained by multi-task learning of the translation and the restoration tasks.", "That is, the model is trained on both the translated and untranslated sentences.", "On the other hand, Base represents the model trained only on the translation task, and thus only on the translated sentences.", "Our model outperforms the baseline model by a significant margin.", "Furthermore, we generate sentences using the beam search method with length normalization.", "In this study, we compare greedy search and beam search with a beam size of 3.", "As shown in Table 5, results obtained with a beam size of 3 are slightly better than those of the greedy search method.", "Finally, our model obtains a BLEU score of 0.5410, which indicates that it performs reasonably well compared to other recent models trained on other languages.", "We additionally compared our model to a model trained via the pretraining-then-finetuning approach.", "As shown in Table 6, the BLEU score of this approach is 0.3755, which is 5.9% higher than that of the model trained from scratch but 28.7% lower than our multi-task learning approach.", "These results can be explained by two factors.", "First, as the size of the unpaired data is much larger than that of the paired data, multi-task learning fully utilizes both the paired and unpaired data for the translation task, unlike the pretraining-then-finetuning approach.", "Second, the pretraining-then-finetuning approach suffers from catastrophic forgetting (Chen et al., 2020).", "In other words, the finetuning step can fail to maintain the knowledge acquired during the pretraining step.", "As both the restoration and translation tasks are crucial for historical documents, such forgetting is critical in our setting.", "We also tested the quality of the Hanja-Korean translation task using a Chinese-Korean machine translator.", "As few publicly available machine translation models for Chinese-Korean exist, we used Google Translate instead.", "The translator failed to translate the given Hanja sentences in most cases, mainly because Hanja and Chinese have different properties in terms of grammar and word meanings.", "To investigate the translation performance qualitatively, we sampled translated sentences.", "Table 4 shows the sentences translated from the untranslated documents by our model.", "For readability, we append English sentences corresponding to the predicted sentences in each row.", "Each result indicates that our model generates modern sentences corresponding to the contexts of the source Hanja sentences.", "Interestingly, the third example in the table is related to the astronomical observation of the aurora.", "Later, we found prior studies confirming that the red energy mentioned in our
document was an aurora (Zhang, 1985; Stephenson and Willis, 2008).", "This highlights the importance of machine translation of historical records, which are essential survey material for researchers in various fields such as astrophysics and geology.", "Therefore, we further analyze the documents with the topic modeling approach.", "As described in Section 4.3, we calculate the term-topic weight matrix W and the date-topic weight matrix H.", "We select three interesting topics from the K topics and, for each topic, visualize the term-topic weights in W as a word cloud and the date-topic weights in H as a smoothed time-series graph.", "Fig. 4 shows the results.", "The first topic is related to troops and military exercises.", "As shown in the red dashed box in the time-series graph, the weights dramatically decrease in 1882, whereas they continuously increase after the biggest war, in 1592.", "In fact, a coup attempt by the old-guard soldiers occurred in 1882, leading to intervention by neighboring countries and the decline of self-reliant defense.", "The fifteenth topic is related to war and national defense.", "Although this topic is related to the preceding military topic, it is more concerned with international relations than the first one.", "In the early years of the dynasty, northern enemies and pirates frequently invaded Joseon, which appears as large topical weights in the beginning.", "The weights increase in the late sixteenth century and remain at a high level until 1637, a period during which three great wars broke out in Joseon.", "The eighteenth topic is related to astronomical observations such as halos and meteor showers.", "In the mid-sixteenth century, people observed the Leonids, as shown in the first red box of the graph.", "We later found that experts in astronomy had also discovered this, using AJD (Yang et al., 2005).", "Moreover, from the mid-seventeenth century to the early eighteenth century, the number of sunspots was low.", "Solar observers call this event the Maunder minimum (Eddy, 1976; Shindell et al., 2001).", "This event caused abnormal climate phenomena, such as the third example in Table 4, as shown in the second red box of the graph.", "This topic demonstrates the importance of historical records, since phenomena that occurred centuries ago are otherwise difficult to spot.", "Note that previous studies mainly attempted to exploit only AJD or the translated parts of DRS.", "However, we utilize both AJD and the majority of DRS records by applying advanced NMT techniques.", "When we performed topic modeling using only the manually translated sentences, it failed to include topics such as the health of the royal families and actions against those accused of treason, topics which were revealed by our approach.", "This is because the voluminous documents that have not been manually translated contain their own topics.", "In this way, we extract several valuable topics even with no special knowledge of the Hanja domain.", "Translating the historical records into modern languages expands our knowledge base, and analysis of the records using machine translation and text mining techniques may help analysts effectively explore them.", "In this paper, we proposed a novel approach to translate and restore the historical records of the Joseon dynasty by formulating a multi-task learning problem based on the self-attention mechanism.", "Our approach significantly increases the translation quality by learning
the rich contents in large documents.", "We anticipate that these tasks are the first steps towards translating the ancient Korean historical records into modern languages such as English.", "Furthermore, the model effectively predicts the original words from the damaged parts of the documents, which is an essential step for restoring the damaged documents.", "Results from text mining approaches show that our approaches have the potential to support analysts in effectively exploring the large volume of historical documents.", "We also expect that researchers from diverse domains can explore the documents and make historical discoveries, such as astronomical phenomena and previously unknown international affairs, with no special domain knowledge.", "As future work, we will also leverage transfer learning to translate historical documents into other languages, such as English or French.", "We also plan to apply knowledge graph-based machine learning approaches, e.g., knowledge graph embeddings and graph neural networks, to discover historical events and relations.", "This work was supported by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.2020-0-00368, A Neural-Symbolic Model for Knowledge Acquisition and Inference Techniques; No.2019-0-00075, Artificial Intelligence Graduate School Program (KAIST); and No.2021-0-01341, Artificial Intelligence Graduate School Program (Chung-Ang University)) and by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (No.NRF-2019R1A2C4070420)." ]
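The term-date NMF topic modeling described in the record above (Eq. 9) is compact enough to sketch in code. The following is a minimal sketch, not the authors' implementation: it assumes documents already parsed into morphemes and filtered to noun/adjective tokens, uses a hypothetical `docs_by_date` mapping, and relies on scikit-learn's NMF (the `alpha_W`/`alpha_H` arguments assume scikit-learn >= 1.0).

```python
# Minimal sketch of term-date NMF topic modeling (cf. Eq. 9 above).
# Assumption (not from the paper's code): `docs_by_date` maps each date to
# the concatenated noun/adjective morphemes of all records from that date.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import NMF

def topic_model(docs_by_date, n_topics=20, alpha=0.1):
    dates = sorted(docs_by_date)
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs_by_date[d] for d in dates)  # (D, V)
    V = X.T  # (V, D): term-date matrix, matching V in R^{V x D}

    # Non-negative factorization V ~ W H^T with L1 regularization;
    # `alpha` plays the role of the regularization weight in Eq. 9.
    nmf = NMF(n_components=n_topics, init="nndsvd",
              alpha_W=alpha, alpha_H="same", l1_ratio=1.0, max_iter=500)
    W = nmf.fit_transform(V)   # (V, K): term-topic weights
    H = nmf.components_.T      # (D, K): date-topic weights
    return W, H, vectorizer.get_feature_names_out(), dates
```

Sorting each column of W then gives the per-topic word clouds, and smoothing each column of H over dates gives the time-series graphs discussed above.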
[ "abstain", "abstain", "abstain", "method", "result", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "other", "other", "other", "other", "abstain", "objective", "objective", "objective", "abstain", "method", "result", "abstain", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "abstain", "other", "other", "other", "other", "other", "other", "method", "objective", "other", "other", "other", "other", "other", "other", "other", "other", "objective", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "method", "method", "abstain", "other", "abstain", "method", "method", "method", "method", "abstain", "method", "method", "method", "method", "method", "other", "abstain", "method", "abstain", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "other", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "result", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "objective", "objective", "abstain", "method", "other", "other" ]
[ "While rule-based detection of subjectverb agreement (SVA) errors is sensitive to syntactic parsing errors and irregularities and exceptions to the main rules, neural sequential labelers have a tendency to overfit their training data.", "We observe that rule-based error generation is less sensitive to syntactic parsing errors and irregularities than error detection and explore a simple, yet efficient approach to getting the best of both worlds: We train neural sequential labelers on the combination of large volumes of silver standard data, obtained through rule-based error generation, and gold standard data.", "We show that our simple protocol leads to more robust detection of SVA errors on both in-domain and out-of-domain data, as well as in the context of other errors and long-distance dependencies; and across four standard benchmarks, the induced model on average achieves a new state of the art. 1 Introduction Grammatical Error Detection.", "Grammatical Error Detection (GED, Leacock et al., 2010) is the task of detecting grammatical errors in text.", "It is used in various real-world applications, such as writing assistance tools, self-assessment frameworks and language tutoring systems, facilitating incremental and/or exploratory editing of one's writing.", "Accurate error detection systems also have potential applications for language generation and machine translation systems, guiding automatically generated output towards grammatically correct sequences.", "The problem of detecting subjectverb agreement (SVA) errors is an important subtask of GED.", "In this work, we focus on detecting subject verb agreement errors in the English as a Second Language (ESL) domain.", "Most SVA errors occur at the third-person present tense when determining whether the subject describes a singular or a plural concept.", "The following examples demonstrate subjectverb agreement errors (bold): (1)", "The task can be formulated as a sequence labeling problem, with the goal of labeling subject verb pairs as being in agreement or not.", "Approaches.", "Sequence labeling problems in NLP, including GED and the subtask of identifying SVA errors, have, in recent years, been handled with Recurrent Neural Networks (RNNs) trained on large amounts of data (Rei and Yannakoudakis, 2016, 2017).", "However, most publicly available datasets for GED are relatively small, making it difficult to learn a general grammar representation and potentially leading to over-fitting.", "Previous work has also shown that neural language models with a similar architecture have diffi-culty learning subjectverb agreement patterns in the presence of agreement attractors (Linzen et al., 2016).", "Rule-based approaches (Andersen et al., 2013) are still considered a strong alternative to end-to-end neural networks, with many industry solutions still relying on rules defined over syntactic trees.", "The rule-based approach has the advantage of not requiring manual annotation, while also allowing easy access to adding and removing individual rules.", "On the other hand, language is continuously evolving, and there are exceptions to most grammar rules we know.", "Additionally, rule-based matching typically relies on syntactic pre-processing, which is error-prone, leading to compounding errors that hurt the downstream GED performance.", "end-to-end neural models for the detection of SVA errors.", "We show that rule-based systems are vulnerable to errors in the underlying syntactic parsers, while also failing to capture irregularities and 
exceptions.", "In contrast, end-to-end neural architectures are limited by the available labeled examples and sensitive to the variance in these datasets.", "We then make the following observation: while rule-based error detection is severely affected by errors and irregularities in syntactic parsing, rule-based error generation is more robust.", "SVA errors can be generated without identifying subject dependency relations in advance, and changing the number of a verb almost always leads to an error.", "This generated data can be used as a silver standard for optimizing neural sequence labeling models.", "We demonstrate that a system trained on a combination of available labeled data and large volumes of silver standard data outperforms both neural and rule-based baselines by a margin on three out of four standard benchmarks, and on average achieves a new state-of-the-art on detecting SVA errors.", "Neural approaches.", "Recent neural approaches to GED include Rei and Yannakoudakis (2016) who argue that bidirectional (bi-) LSTMs, in particular, are superior to other RNNs when evaluated on standard ESL benchmarks for GED and give state-of-the-art results.", "Rei and Yannakoudakis (2017) show even better performance using a multi-task learning architecture for training bi-LSTMs that additionally predicts linguistic properties of words, such as their part of speech (PoS).", "Recent studies (Linzen et al., 2016; Gulordava et al., 2018; Kuncoro et al., 2018) have specifically analyzed the performance of LSTMs in learning syntax-sensitive dependencies such as SVA.", "Rule-based approaches.", "Cai et al. (2009) use a combination of dependency parsing and sentence simplification, as well as special handling of wh -elements, to detect SVA errors.", "Once the subjectverb relation is identified, after parsing the simplified input sentence, a PoS tagger is used to check agreement.", "This is similar in spirit to the rule-based baseline system used in our experiments below.", "Wang et al. (2015) use a similar approach, distinguishing between four different sentence types and using slightly different rules for each type.", "Their rules are, again, defined over the outputs of a dependency parser and a PoS tagger.", "Sun et al. 
(2007) use labeled data to derive rules based on dependency tree patterns.", "Automatic error generation.", "Because of the scarcity of annotated datasets in GED, research has been carried out on creating artificial errors, where errors are injected into otherwise correct text using deterministic rules or probabilistic approaches using linguistic information (Felice and Yuan, 2014; Kasewa et al., 2018).", "Studies focusing on detecting specific error types such as determiners and prepositions (Rozovskaya and Roth, 2011) or noun number (Brockett et al., 2006) are mainly developed within the framework of automatic error generation.", "Recent work, expanding the detection (Rei et al., 2017) and the correction (Xie et al., 2018) tasks to all types of errors, improves the performance of neural models by training on additional artificial error data generated via machine translation methods.", "Miscellaneous.", "Recent work has also led to good performance in correcting grammatical errors (Yannakoudakis et al., 2017; Bryant and Briscoe, 2018; Chollampatt and Ng, 2018).", "However, in this paper, we are interested in the task of grammatical error detection, and we therefore compare our work to current state-of-the-art approaches to detecting errors and do not report the performance of correction systems.", "Following recent work on GED (Rei and Yannakoudakis, 2016), we define SVA error detection as a sequence labeling task, where each token is simply labeled as correct or incorrect.", "For a given SVA error, only the verb is labeled as incorrect.", "Error types other than SVA are ignored, i.e., we do not correct the errors in the text and we do not attempt to predict them as incorrect.", "In this paper, we only study SVA in English.", "We note that even for English, there is some controversy about what constitutes an SVA error.", "Manaster-Ramer (1987) cites this example, which has been used by some as an argument for English exhibiting cross-serial dependencies: (2) The man and the women dance and sing, respectively.", "We also note that subject-verb agreement can be more or less pervasive across languages, depending on how rich the morphology is, whether the given language exhibits pro-drop, and how far apart subjects and verbs are likely to occur.", "Typically, building a rule-based GED system is time-consuming and requires specific knowledge to deal with the multiple exceptions and irregularities of languages.", "Difficult cases (such as long-distance subject-verb relations) are often ignored in order to ensure high precision, at the expense of the recall of the system.", "However, our rule-based system is not limited to the detection of simple cases of SVA errors.", "It relies on PoS tags and dependency relations to identify all types of SVA errors.", "Specifically, our rule-based system operates as follows:", "(i) it identifies the candidate verbs (present tense verbs, plus was and were) based on PoS tags;", "(ii) for a given verb, it uses the dependency relations to find its subject (either directly attached with an nsubj relation, or indirect, such as when the syntactic subject is a relative pronoun, e.g., who, or an expletive, e.g., there);", "(iii) the PoS tags of the verb and its subject are used to check whether they agree in number and person.", "We use predicted Penn Treebank PoS tags and dependency relations provided by the Stanford Log-linear PoS Tagger (Toutanova et al., 2003) and the Stanford Neural Network Dependency Parser (Chen and Manning, 2014), respectively.", "We use the state-of-the-art neural sequence labeling architecture for error detection (Rei and Yannakoudakis, 2016).", "The model receives a sequence of tokens (w_1, ..., w_T) as input and outputs a sequence of
labels (l_1, ..., l_T), i.e., one for each token, indicating whether the token is grammatically correct (in agreement) or not in the given context.", "All tokens are first mapped to distributed word representations, pre-trained using word2vec (Mikolov et al., 2013) on the Google News corpus.", "Following Lample et al. (2016), character-based representations are also built for every word using a bi-LSTM (Hochreiter and Schmidhuber, 1997) and then concatenated onto the word embedding.", "The combined embeddings are then given as input to a word-level bi-LSTM, creating representations that are conditioned on the context from both sides of the target word.", "These representations are then passed through an additional feed-forward layer, in order to combine the extracted features and map them to a more suitable space.", "A softmax output layer returns the probability distribution over the two possible labels (correct or incorrect) for each word.", "We also include the language modeling objective proposed by Rei (2017), which encourages the model to learn better representations via multi-tasking and predicting surrounding words in the sentence.", "Dropout (Srivastava et al., 2014) with probability 0.5 is applied to word representations and to the output from the word-level bi-LSTM.", "The model is optimised using categorical cross-entropy with AdaDelta (Zeiler, 2012).", "As the public datasets either have their own taxonomy or are not annotated with error types at all, we apply the error type extraction tool of Bryant, Felice, and Briscoe (2017) to automatically get error types mapped to the same taxonomy for all datasets.", "The tool automatically annotates parallel original and corrected sentences with error type information.", "When evaluated by human raters, the predicted error types were rated as good or acceptable in at least 95% of the cases.", "We use their publicly available tool (https://github.com/chrisjbryant/errant) to automatically get error types for all public datasets mapped to the same taxonomy of 25 error types in total.", "We then set SVA errors as our target class.", "We compare the rule-based and neural approaches for the task of SVA error detection on four benchmarks in the ESL domain.", "FCE.", "The Cambridge Learner Corpus of First Certificate in English (FCE) exam scripts consists of texts produced by ESL learners taking the FCE exam, which assesses English at the upper-intermediate proficiency level (Yannakoudakis et al., 2011).", "We use the publicly available test set.", "AESW.", "The Automated Evaluation of Scientific Writing 2016 dataset (AESW) is a collection of text extracts from published journal articles (mostly in physics and mathematics) along with their (sentence-aligned) corrected counterparts (Daudaravicius et al., 2016).", "We test on the combined training, development and test sets; sentences containing special placeholders for mathematical equations, dates, etc. are filtered out.", "JFLEG.", "The JHU Fluency-Extended GUG corpus (JFLEG) represents a cross-section of ungrammatical data, consisting of sentences written by ESL learners with different proficiency levels and L1s (Napoles et al., 2017).", "We evaluate our models on the public test set.", "CoNLL14.", "The test dataset from the CoNLL 2014 shared task consists of (mostly argumentative) essays written by advanced undergraduate students from the National University of Singapore, and is annotated for grammatical errors
by two native speakers of English (Ng et al., 2014).", "ESL writings.", "We use the following ESL datasets as training data: Lang8 is a parallel corpus of sentences with errors and their corrected versions, created by scraping the Lang-8 website (http://lang-8.com/), an open platform where language learners can write texts and native speakers of that language can provide feedback via error correction (Mizumoto et al., 2011).", "It contains 1,047,393 sentences.", "NUCLE comprises around 1,400 essays written by students from the National University of Singapore.", "It is annotated with error tags and corrections by professional English instructors (Dahlmeier et al., 2013).", "It contains 57,151 sentences.", "FCE train set.", "We use the publicly available FCE training set, containing 25,748 sentences.", "A subset of 5,000 sentences was separated and used for development experiments.", "Artificial errors.", "We generate artificial subject-verb agreement errors from large amounts of data.", "Specifically, we use the British National Corpus (BNC, BNC-Consortium et al., 2007), a collection of British English sentences that includes samples from different media such as newspapers, journals, letters or essays.", "Subject-verb agreement in English merely consists of inflecting 3rd person singular verbs in the present tense (and be in the past), which makes any text in English fairly easy to corrupt with SVA errors.", "We assume that the BNC data is written in correct British English.", "Using predicted PoS tags provided by the Stanford Log-linear PoS Tagger, we identify verbs in the present tense, as well as was and were for the past tense, and flip them to their respective opposite version using the list of inflected English words (annotated with morphological features) from the Unimorph project (Kirov et al., 2016).", "The final artificial training set includes the sentences with injected errors (265,742 sentences), their original counterparts, and sentences where SVA errors could not be injected due to not containing candidate verbs that could be flipped (241,295 sentences).", "The models.", "We compare our neural model trained on both artificially generated errors and ESL data (LSTM_ESL+art) to three baselines: a neural model trained only on ESL data (LSTM_ESL), i.e., reflecting the performance of current state-of-the-art approaches for GED, a language model based method (BERT-LM), and our rule-based system.", "In order to measure the real performance of a language model (LM) on the detection of SVA errors, we choose to use the BERT system (Devlin et al., 2018) to assign probabilities to different versions of the test sentences.", "Specifically, we use the pre-trained uncased BERT-Base model.", "We duplicate the sentences each time a corruptible verb occurs (flipping its number).", "The LM assigns a probability to both possible versions of the verbs.", "We select the version which has the highest probability, if this probability is at least 0.1 higher than the probability of the verb in the original sentence.", "(We tune this threshold on the test dataset from the CoNLL 2013 shared task on Grammatical Error Correction of ESL learner essays.)", "Hyper-parameters.", "We tune the model hyper-parameters on the FCE development set, according to the F0.5 score.", "Training is stopped when F0.5 on the FCE development set does not improve over 7 epochs.",
"Word representations have size 300 , while character representations have size 100 .", "The word-level LSTM hidden layers have size 300 for each direction, and the character-level LSTM hidden layers have size 100 for each direction.", "Evaluation.", "Existing approaches are typically optimised for high precision at the cost of recall, as a system's utility depends strongly on the ratio of true to false positives, which has been found to be more important in terms of learning effect.", "A high number of false positives would mean that the system often flags correct language as incorrect, and may therefore end up doing more harm than good (Nagata and Nakatani, 2010).", "Because of this, F 0 .", "5 is preferred to F 1 in the GED domain as it puts more weight on precision than recall.", "For each experiment, we report the token-level precision (P), the recall (R), and the F 0 .", "5 scores.", "The main results are summarized in Table 1. Looking at the performance of the LSTMESL + art system, we see that on 3 out of 4 benchmarks, our neural model trained on artificially generated errors outperforms the LSTMESL system with respect to F 0 .", "5 .", "On average, over the four benchmarks, its F 0 .", "5 score is 2 .", "43 points higher than the best performing baseline.", "Both neural models obtain higher F 0 .", "5 scores than the rule-based baseline, on average and across the board, i.e., +10 .", "6 for LSTMESL and +15 .", "7 for LSTMESL + Art .", "The BERT-LM outperforms the LSTMESL (mostly due to its higher recall, i.e., +18 . 66 ) but still does not reach the F 0 .", "5 score of the LSTMESL + Art system which gets higher precision and recall overall ( +2 . 62 and +1 . 51 respectively).", "Furthermore, we observe a trend that the two LSTM systems trade off precision and recall, with the LSTMESL system yielding the highest precision across most datasets, but also yielding significantly lower recall than LSTMESL + Art .", "It is also evident that the performance varies over domains: all models struggle with AESW.", "This is likely due to the complexity of the scientific writing genre where, for example, sentences contain parentheses interposed between a verb and its subject.", "We also note errors are far less frequent in this genre, leading to moderate recall and very low precision.", "For the rest of the datasets, system performance is generally better.", "We analyze the effect of adding artificial errors to the training data.", "In particular, we focus on the robustness of our models by looking at how sensitive they are to grammatical errors in the surrounding context; and by looking at how good the models are at predicting agreement relative to the distance between the subject and verb.", "This set of experiments is similar in spirit to Linzen et al. 
(2016).", "We also analyze our rule-based baseline: so far, we know our rule-based baseline was sensitive to parser errors and irregularities.", "We inspect the quality of the underlying parser by evaluating it on data that resembles the data used in our experiments, to see whether errors seem to result more from parser errors or irregularities.", "Finally, we also look at the sensitivity of our systems to other linguistic phenomena such as relative clauses or conjunctions.", "In ESL writings, multiple errors can occur in the same sentence.", "This means more variable contexts, which can lead to degradation in the performance of both syntactic parsers / rule-based systems and GED models.", "Testing on noisy contexts We first evaluate how our systems are impacted by additional non-SVA errors in the surrounding context of SVA errors in our test data.", "For each of the test datasets, we create multiple versions, allowing for n non-SVA errors per sentence (we correct the extra non-SVA errors).", "This way we can create datasets with different levels of complexity with respect to the grammatical errors within them.", "In Figure 1, the F 0 .", "5 scores of the models are shown for different numbers of grammatical errors per sentence.", "It is evident that all of the models are negatively affected by the presence of other errors in the same sentence.", "Using more data for training i.e., our artificial training data which does not include context errors generally boosts performance on data with and without grammatical errors in the context.", "In other words, training with additional artificially generated errors seems, overall, to be making our model more robust.", "We also note that our rule-based baseline is affected by errors to roughly the same extent as our baseline neural model is.", "One might have thought the rule-based baseline would suffer more, because of it being sensitive to errors in the underlying syntactic parser.", "We return to this issue below.", "Training on non-noisy contexts In order to assess the benefit of training on non-erroneous contexts, we create a new dataset from our ESL training data (see 5.3).", "Based on the annotations in the data, we apply the corrections of error types other than SVA, thereby only leaving SVA errors in the data.", "We experiment with how adding this clean' dataset to the training set of our existing systems affects performance.", "The resulting F 0 .", "5 scores are listed in Table 2. 
"Using 'clean' sentences in addition to our original ESL data for training always positively affects performance.", "In this regard, as shown by Rei and Yannakoudakis (2016), training on more data in the same domain is a valid solution for improving the performance of LSTM models.", "However, when also adding artificially generated data to the training set, we reach higher scores only on 2 out of the 4 benchmarks.", "It greatly improves the average recall (+11.03) without hurting the precision on FCE and CoNLL14, but negatively affects the precision on AESW and JFLEG.", "Next, we want to study how well our models perform when the subjects and verbs are far apart, i.e., when the agreement relation is defined over a long-distance dependency.", "In order to see how our systems are affected by the distance between the subject and verb, we split the test sets based on different subject-verb distances.", "Note, however, that our benchmarks are not annotated with PoS tags and dependency relations.", "If we binned our test data based on predicted dependencies, the inductive bias of our syntactic parser and the errors it made would bias our evaluation.", "Instead, we perform our analyses on sections 22 and 23 of the Penn Treebank (PTB) dataset (Marcus et al., 1993).", "The PTB, however, is not annotated with grammatical errors.", "We therefore corrupt the sentences by injecting SVA errors, in the same way we corrupted the BNC (Section 5.3) to create additional training data.", "For each sentence in the PTB, we identify a subject-verb pair, and group the sentences by the subject-verb distance.", "We then run our models on two versions of each sentence: an unaltered version and a corrupted one, where we have generated an SVA error by corrupting the verb, using the method described earlier (Section 5.3).", "This way, we can compute the performance of our models as F0.5 scores over this dataset.", "The results are displayed in Figure 2.",
"We can see that the LSTM trained with artificial data performs significantly better on long-distance subject-verb pairs than the LSTM trained only on ESL data.", "This suggests that training on artificially generated errors also makes our models more robust to this potential source of error.", "Note that, in general, there is a substantial gap between the performance of the two LSTM models.", "This is because one is trained on artificial data similar to the data we use in our analysis.", "However, the conclusions are based on the relative differences in performance over long-distance dependencies, and these differences should still be comparable across the two models.", "There are two obvious potential sources of error for our rule-based baseline: sensitivity to errors in the underlying syntactic parsers, and sensitivity to the irregularities of language, e.g., when collective nouns or named entities are subjects, subject-verb agreement cannot always be determined by the PoS tags.", "We show that the main source of error seems to be irregularities by showing that the underlying syntactic parsers perform relatively well, even in the ESL domain.", "Table 3 lists the parsing and tagging performance of our underlying syntactic parsers across three domains: learner data (ESL) and web data (EWT) from the Universal Dependencies (UD) project (Nivre et al., 2017), as well as the newswire data they were trained on (PTB).", "We only evaluate subject-verb relations, since these are the only ones of interest in this paper.", "We see that while there is a noticeable out-of-domain drop going from newswire to learner language or web data, the parser is still able to detect subject-verb relations with high precision and recall.", "This suggests that the vulnerability of our rule-based baseline is primarily a result of linguistic irregularities and exceptions to the implemented rules.", "Finally, manually reviewing the errors made by the rule-based system, we identified frequent linguistic sources of errors, including relative clauses, conjunctions, ambiguous PoS tags, and collective nouns.", "We therefore analyze how the LSTMs and the rule-based system are globally sensitive to these potential sources of error.", "Since our benchmarks are not annotated with PoS and dependency relations, we again use the corrupted PTB sentences (see Section 8.2).", "Many of the examples in which our rule-based baseline fails include relative clauses (when the verb is the root of a relative clause) and conjunctions (when the subject is a conjunction).", "A second major cause of failure is ambiguous verbs, i.e., verb forms that can also be nouns (ambiguous PoS, e.g., need, stop, point, etc.), and subjects which are singular nouns describing groups of people or things (collective nouns, e.g., team, family, staff, etc.).", "The following examples illustrate these cases (underlined): (3)", "b. If there is someone who doesn't agree with me, he or she [...] (relative clause)", "c. It is said that the majority of the citizens has got a car [...] (collective noun)", "d. [...
] and police officer walk around the building as well.", "(ambiguous PoS)", "We evaluate our models on the PTB data and report the error rate (the lower the better) on present tense verbs (Figure 3).", "Overall, results show that all models are negatively affected when they encounter complex syntactic structures and ambiguous cases.", "Figure 3 also confirms that the rule-based baseline is the most sensitive to complex structures.", "Especially in comparison with the LSTM_ESL+art model, the rule-based system achieves good scores on verbs which are not part of complex structures, but performs significantly worse on difficult cases.", "The LSTM_ESL model is the worst across almost all cases, while the LSTM_ESL+art shows significant improvements over the baselines, in particular for the difficult cases.", "In this paper, we argue for artificial error generation as an effective approach to learning more robust neural models for subject-verb agreement detection.", "We demonstrate that error generation is much less sensitive to parsing errors and irregularities than rule-based systems for detecting subject-verb agreement.", "Moreover, artificial error generation enables us to utilise much more training data, and therefore to develop more robust neural models for SVA error detection that do not overfit the available, manually annotated training data.", "Our simple approach to detecting subject-verb agreement errors achieves a new state of the art on three out of four available benchmarks, and, on average, is better than previous approaches on the task.", "We show that, in particular, models trained on large volumes of artificially generated errors become more robust to other errors in the surrounding context of SVA, long-distance dependencies, and other challenging linguistic phenomena.", "This project was supported by Siteimprove and the Innovation Fund of Denmark through an industrial PhD grant.", "Marek Rei and Helen Yannakoudakis were supported by Cambridge Assessment, University of Cambridge." ]
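The F0.5 measure that drives model selection and reporting in the record above is the weighted F-measure F_beta = (1 + beta^2) * P * R / (beta^2 * P + R) with beta = 0.5, which weights precision more heavily than recall. A small self-contained helper, written here purely for illustration:

```python
# Token-level precision, recall, and F0.5 from confusion counts,
# as used in the GED evaluation described above.

def f_beta(tp, fp, fn, beta=0.5):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    if p == 0.0 and r == 0.0:
        return p, r, 0.0
    b2 = beta * beta
    f = (1 + b2) * p * r / (b2 * p + r)
    return p, r, f

# Example: 80 true positives, 10 false positives, 40 false negatives
# gives P ~ 0.889, R ~ 0.667, F0.5 ~ 0.833 (precision-dominated).
```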
[ "abstain", "objective", "objective", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "objective", "other", "other", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "other", "other", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "other", "abstain", "method", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "other", "method", "method", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "method", "abstain", "method", "abstain", "abstain", "method", "method", "abstain", "method", "method", "method", "method", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "result", "other", "other" ]
[ "While natural language understanding (NLU) is advancing rapidly, today's technology differs from human-like language understanding in fundamental ways, notably in its inferior efficiency, interpretability, and generalization.", "This work proposes an approach to representation and learning based on the tenets of embodied cognitive linguistics (ECL).", "According to ECL, natural language is inherently executable (like programming languages), driven by mental simulation and metaphoric mappings over hierarchical compositions of structures and schemata learned through embodied interaction.", "This position paper argues that the use of grounding by metaphoric inference and simulation will greatly benefit NLU systems, and proposes a system architecture along with a roadmap towards realizing this vision.", "While current NLU systems speak human language by learning strong statistical models, they do not possess anything like the rich mental representations that people utilize for language understanding.", "Indeed, despite the tremendous progress in NLU, recent work shows that today's state-of-the-art (SOTA) systems differ from human-like language understanding in crucial ways, in particular in their generalization, grounding, reasoning, and explainability capabilities (Glockner et al., 2018; McCoy et al., 2019a,b; Nie et al., 2019; Yogatama et al., 2019; Lake et al., 2019).", "Question-answering (QA) is currently one of the predominant methods of training deep-learning models for general, open-domain language understanding (Gardner et al., 2019b).", "While QA is a versatile, broadly-applicable framework, recent studies have shown it to be fraught with pitfalls (Gard-ner et al., 2019a; Mudrakarta et al., 2018).", "A recent workshop on QA for reading comprehension suggested that There is growing realization that the traditional supervised learning paradigm is broken [...] we're fitting artifacts (Gardner, 2019).", "In many respects, the problems of NLU mirror those of artificial intelligence (AI) research in general.", "Lake et", "al.'s (2017a) seminal work identified a significant common factor at the root of problems in general AI.", "The current deep-learning paradigm is a statistical pattern-recognition approach predominantly applied to relatively narrow task-specific prediction .", "In contrast, human cognition supports a wide range of inferences (planning, action, explaining, etc.), hinting at a view of intelligence focused on model-building , specifically, mental models: rich, structured, manipulable, and explainable representations useful for performing in dynamic, uncertain environments.", "This distinction motivates the quest for a new cognitively-inspired model-building learning paradigm for general AI, which has inspired fruitful subsequent research and discussion (e.g., Lake et al. 
(2017b)).", "The observation that NLU and general AI share a common central problem (task-specific prediction-based learning), and the growing realization that deeper text understanding requires building mental models (Gardner et al., 2019a; Forbes et al., 2019), motivate the search for an NLU analog of the cognitively-inspired model building paradigm.", "Amid recent position papers highlighting significant differences between human language understanding and current NLU systems (McClelland et al., 2019; Bisk et al., 2020), here we take a more focused look at mental models; challenges arising due to their embodied nature, their importance in general NLU, and how we might begin integrating them into current approaches.", "Mainstream NLU work, be it entirely distributional, such as BERT (Devlin et al., 2019), or also involving symbolic knowledge representation (Liu et al., 2019a; Bosselut et al., 2019), seldom addresses mental models directly.", "Crucially, such approaches lack the interactive worlds within which mental models 1 are learned jointly through language and embodied action.", "The most closely related lines of work to the present proposal are grounded approaches, which feature worlds in the form of interactive environments, and address mapping text to programs (executable semantic parses) (e.g., Gauthier and Mordatch, 2016; Liang, 2016; Kiela et al., 2016; Chevalier-Boisvert et al., 2019).", "However, while well-aligned with a model-building paradigm, typically such approaches have been limited to short or synthetic literal language and narrow domains assuming predefined environments.", "Embodied approaches to general NLU, as advocated here, are few and far between.", "Mostly, examples fall under the construction grammar framework (Steels and de Beule, 2006; Bergen and Chang, 2005).", "However, despite their intellectual merit, they were not operationalized to scale readily for mainstream applications (see 3).", "This position paper argues that executable semantic parsing and grounded approaches to NLU constitute a first step in a much larger program, whose outline is set forth, for general language understanding through embodied cognitive linguistics (ECL) .", "Following much cognitive science research (see 3, 4), this paper posits that (1) execution or simulation is a central part of semantics, essential for addressing some of the persistent diffi-culties in text understanding, and (2) metaphoric inference capabilities are central to knowledge representation, and facilitate grounded understanding of general language.", "Importantly, capacities for both simulation and metaphor are emergent, borne of embodied interaction within an external world.", "Our contributions are: we analyze inherent limitations of SOTA statistical language models applied to NLU and propose a framework to address these limitations.", "The novelty of this approach stems from bringing together ideas from the cognitive science literature, the wider AI community, and NLU.", "This framework constitutes a path to generalize current execution-based methods towards more general language understanding.", "This paper proposes a system architecture and a roadmap towards implementing the vision outlined here, suggesting preliminary directions for future work (learned world models, incorporating interaction into datasets).", "We believe that this framework will facilitate consolidation with multiple related lines of research across the different communities, particularly embodied AI and NLU (Luketina et al., 2019).", 
"This section presents concrete example problems demonstrating inherent limitations in SOTA NLU.", "Fig. 1 includes a short story about a world with crates, boxes, and objects inside them.", "It is a short and simple narrative, far from capturing the full-blown complexity of natural language.", "Following Gardner et al. (2019a), we assume that a system understands the story if it can correctly answer arbitrary questions about it.", "To do so requires basic commonsense and mathematical reasoning, referent grounding, tracking events, handling declarative knowledge, and more.", "The task is similar to narrative comprehension tasks in datasets such as bAbI (Bordes et al., 2015) and SCONE (Long et al., 2016), and could be solved given large amounts of annotated training data.", "But, the goal here is different, specifically, to develop models that, like humans, can understand such language on-the-fly (like zero-shot learning).", "QA approaches.", "Current QA systems, used in an off-the-shelf manner, do not generalize well to tasks on which they have not been trained; NLU models are known to be brittle even to slight changes in style and vocabulary (Gardner et al., 2020; Keysers et al., 2020).", "The closest QA setting is the DROP challenge (Dua et al., 2019), requiring reading comprehension and basic numerical reasoning over paragraphs.", "As a simple sanity check, we tested a near-SOTA model and baseline 2 on this example, asking questions about the initial and final state.", "The models were notably better answering questions about the initial state than about the final state.", "This result is perhaps expected, as the answers to questions about the initial state are closer to the input text.", "Answering questions about later states is more challenging.", "A key missing component of these systems is the ability to simulate the effects of actions, especially commonsense effects (e.g., moving a container moves the elements in it).", "Executable semantic parsing approaches.", "The problem of Fig. 1 could also naturally be cast as an executable semantic parsing (ex. SP) task.", "Similar tasks already exist, for example, the Alchemy sub-task of the SCONE dataset features beakers of chemicals that are mixed, poured, and drained.", "Executable approaches can leverage simulation to learn structured world models, but are limited by hard-coded, domain-specific executors; adding tasks requires substantial manual effort.", "For humans, through largely subconscious metaphorical inference (related to transfer and meta-learning in general AI (Lake et al., 2017a)), it is obvious that both SCONE and Fig. 1 share much the same structure.", "This similarity allows for effortless generalization, effectively re-purposing a relatively simple executor (for literal language) flexibly across many tasks.", "The previous challenge involved literal language, amenable to symbolic execution.", "However, non-literal language is pervasive in everyday speech (Lakoff and Johnson, 1980).", "Consider the example in Fig. 2: the phrase head of the French Army is non-literal, implying that the army can be treated as a human body.", "The execution semantics of verbs like attacked and defend are also non-literal; they are highly contextual, requiring interpretation beyond word-sense disambiguation alone.", "Russian hackers attacked the Pentagon networks or The senator attacked the media entail very different simulations .", "This ambiguity is challenging for non-neural (symbolic) simulation-2 Segal et al. (2019) and Dua et al. 
(2019), respectively.", "To summarize, the limitations outlined above motivate the attempt to extend the capability of simulation to general linguistic inputs.", "Doing so would enable the construction of grounded, manipulable, and interpretable representations from text.", "Two desiderata follow from the challenges: (1) more flexible utilization of symbolic executors by exploiting shared (analogical) structures between texts ( 2.1), and (2) learned, neural executors for non-literal language comprehension ( 2.2).", "Turning to cognitive science for inspiration, we focus on embodied cognitive linguistics (ECL), an important paradigm directly addressing both desiderata.", "This section presents a brief overview and key tenets of ECL, specifically the theoretical foundations Lakoff and Johnson (1980) and Feldman and Narayanan (2004) developed.", "Most contemporary cognitive accounts of language incorporate concepts from ECL to some degree.", "A full review is out of scope of this work; see Gardenfors (2014) and 4, 5 for discussion in the NLU context.", "Early cognitive theories assumed a disembodied, symbolic representation of knowledge (Lewis, 1976; Kintsch and Van Dijk, 1978), separate from the brain's modal systems (vision, motor control, etc.).", "In contrast, the embodied cognition (EC) view, based on widespread empirical find-ings, focuses on the role of the body in cognition.", "In this view, knowledge is stored using multimodal representations (mental imagery, memories, etc.) that arise from embodied experience and action in the world (Barsalou, 2008; Proffitt, 2006).", "ECL postulates that linguistic representations and other, higher-level cognitive functions are deeply grounded in neural modal systems (Lakoff and Johnson, 1980; Barsalou, 2008).", "This view is compelling, as it addresses the grounding problem (Har-nad, 1990) by linking between high-level symbolic constituents of mental representations and experience or action in the physical world (Varela et al., 2017).", "Note that embodiment is far from an end-all for language comprehension: for example, social and cultural aspects too are crucial (Arbib et al., 2014).", "Still, ECL laid important conceptual foundations also underlying subsequent accounts: Embodied schemata: Pre-linguistic structures formed from bodily interactions and recurring experience, such as CONTAINMENT, PART-WHOLE, FORCE, MOVEMENT (Langacker, 1987; Talmy, 1985, 1983).", "Metaphoric inference: 3 The process by which new information may be inferred via structural similarities to a better-understood instantiated system (Lakoff and Johnson, 1980; Gallese and Lakoff, 2005; Day and Gentner, 2007).", "For example, I have an example IN mind suggests that the abstract concept mind is mapped to the more concrete domain of containers .", "Mental simulation.", "The reenactment of perceptual, motor, and introspective states acquired during experience with the world, body, and mind.", "In EC, diverse simulation mechanisms (also called mental or forward models (Rumle-hart et al., 1986; Grush, 2004)) support a wide spectrum of cognitive activities, including language and decision making (Barsalou, 2008).", "We believe that ECL is a useful paradigm for addressing the challenges of 2, as it articulates the role of analogy and mental simulation in NLU.", "The following two ECL hypotheses summarize 3 Also called analogical reasoning, we use metaphorical and analogical interchangeably.", "Hypothesis 1 (Simulation): Humans understand the meaning of language by mentally 
"Language in context evokes a simulation structured by embodied schemata and metaphoric mappings, utilizing the same neural structures for action and perception in the environment.", "Understanding involves inferring and running the best-fitting simulation.", "Hypothesis 2 (Metaphoric Representation): Human concepts are expressible through hierarchical, compositional, metaphoric mappings over a limited vocabulary of embodied schemata.", "Abstract concepts are expressed using more literal concepts.", "Early ECL Implementations.", "Early attempts to implement ECL in actual language understanding systems were founded on Narayanan (1997)'s x-schema simulation framework and Embodied Construction Grammar (Bergen and Chang, 2005).", "While notable for approaching challenging problems involving mental simulation and complex, metaphoric language, early implementation efforts were not operationalized to scale to mainstream applications (Lakoff and Narayanan, 2010).", "These works also focused on a particular type of simulation (sensorimotor), understood to be only one mechanism of many used in language understanding (Stolk et al., 2016).", "FrameNet (Ruppenhofer et al., 2016) and MetaNet (David and Dodge, 2014) are closely related projects in that each provides an extensive collection of schemata used in everyday and metaphoric language comprehension, respectively, via the concept of a semantic frame (Fillmore, 1985).", "However, neither incorporates simulation semantics, as needed for a full realization of the ECL vision (Chang et al., 2002).", "We propose a unifying view of ECL, bringing it closer to contemporary cognitive science and deep learning approaches.", "This section presents notation and motivating intuitions, further developing the computational framework in §5, §6.", "The proposal centers around the view of natural language as a kind of neural programming language (Lupyan and Bergen, 2016), or a higher-level cognitive control system for systematically querying and inducing changes in the mental and physical states of recipients (Elman, 2004; Stolk et al., 2016; Borghi et al., 2018).", "This approach builds on the ECL hypotheses and suggests a broader view of mental simulation, one that is readily amenable to the same computational formulation as current embodied AI and executable semantic parsing approaches.", "Preliminaries.", "At the core of embodied approaches is the Partially Observable Markov Decision Process (POMDP; Kaelbling et al., 1998).", "It governs the relations between states (s), actions (a), observations (o), and rewards (r).", "Of particular interest are the recognition $O^{-1}: O \to S$, policy $\pi: S \to A$, and transition $T: S \times A \to S$ functions.", "Focusing on mental simulation rather than actual external action, we assume a degree of equivalence between external and internal representations (Rumelhart et al., 1986; Hamrick, 2019).", "We consider internal mental states and actions ($\tilde{s}$, $\tilde{a}$), effecting change to mental models via a learned neural emulator $\tilde{T}$ (Grush, 2004).", "Finally, language is considered a form of action (Glenberg, 2008) via external and internal utterances (i.e., semantic parses).", "Connecting symbolic & embodied language understanding.", "Table 1 presents a structured version of the neural programming language conceptualization.", "[Table 1: columns Concept / Symbolic / ECL / Embodied AI; first row: Primitives - basic data structures, operators, variables, ...]", "Importantly, this view highlights the commonalities and differences between ECL and both symbolic programming languages and embodied neural mechanisms for perception and action.",
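The POMDP notation above can be summarized in code. The following is a notational sketch only, assuming nothing beyond the definitions in the text: the three functions of interest are written as Python type signatures, and a mental simulation is a rollout that grounds an observation and then applies the transition function repeatedly.

from typing import Callable, TypeVar

St = TypeVar("St")    # (mental) states s
Act = TypeVar("Act")  # actions / semantic parses a
Obs = TypeVar("Obs")  # observations / utterances o

# The three functions singled out in the text, as type aliases.
Recognition = Callable[[Obs], St]      # O^{-1}: observation -> state
Policy = Callable[[St], Act]           # pi: state -> action
Transition = Callable[[St, Act], St]   # T: state x action -> next state

def simulate(recognize, policy, step, observation, n_steps):
    """Ground an observation into a state, then roll the model forward
    n_steps times by choosing and applying actions."""
    state = recognize(observation)
    for _ in range(n_steps):
        state = step(state, policy(state))
    return state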
"We illustrate these relations more explicitly through a comparison between ECL and executable semantic parsing (Table 1, bottom).", "Executable semantic parsing.", "Involves parsing a novel linguistic input o into a symbolic program a, whose execution yields a desired goal state: $T(O^{-1}(o), a) = s_g$ (slightly abusing notation, we apply T iteratively on a sequence of actions $a = (a_0, \ldots, a_{L-1})$).", "Executable semantic parsing focuses on action in an external, symbolic environment T, and typically does not address the internal emulator $\tilde{T}$; for example, a natural language question o is mapped directly to an executable query a on an SQL engine T.", "ECL semantic parsing.", "Shares the same structure as executable semantic parsing, with the important distinction that simulation is enacted via internal neural representations: $\tilde{T}(\tilde{O}^{-1}(o), \tilde{a}) = \tilde{s}_g$.", "The fully neural formulation enables grounded understanding of non-literal language, demonstrated here for the Fig. 2 example.", "Metaphoric inference (hyp. 2) facilitates parsing a novel linguistic input o into internal, structured, neural state representations $\tilde{s}$, $\tilde{a}$.", "Accordingly, the utterance u = Napoleon, the head of the French Army might be parsed to an internal state $\tilde{s}$ composed of a PART-WHOLE schema, as shown in the figure.", "The phrase attacked the Russian fort could be grounded to a parse $\tilde{a}$ driving simulation over MOTION and FORCE schemata.", "The requirement that $\tilde{s}$ and $\tilde{a}$ should afford mental simulation (hyp. 1) by the neural world emulator $\tilde{T}$ marks an important difference from current neural word embeddings, one that contributes to deeper language understanding; in the resulting mental model $\tilde{T}(\tilde{s}, \tilde{a})$, Napoleon and the French Army likely moved together, due to the PART-WHOLE relation between them.", "This inference is non-trivial, since it requires implicit knowledge (heads and bodies often move together).", "Indeed, a SOTA NLI model considers it very likely that the Fig. 2 sentence contradicts the hypothesis that The French Army moved towards the fort but did not enter it.",
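The PART-WHOLE inference above can be made concrete with a small, purely illustrative sketch; no claim is made that the proposed neural emulator works this way, but it shows the kind of structured state update that word embeddings alone do not support.

# Hypothetical schema store and state; all names invented for illustration.
schemas = {
    "PART-WHOLE": [("napoleon", "french_army")],  # (part, whole)
}
state = {"french_army": "camp", "fort": "hill", "napoleon": "camp"}

def apply_motion(state, entity, destination, schemas):
    """MOTION schema: move an entity; PART-WHOLE propagates the move to parts."""
    new_state = dict(state)
    new_state[entity] = destination
    for part, whole in schemas["PART-WHOLE"]:
        if whole == entity:
            new_state[part] = destination  # heads move with their bodies
    return new_state

# "attacked the Russian fort" grounded as MOTION of the army towards the fort:
state = apply_motion(state, "french_army", "hill", schemas)
assert state["napoleon"] == "hill"  # the implicit inference the NLI model misses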
"To summarize: executable semantic parsing approaches address grounding literal language to symbolic primitives, while metaphoric inference suggests a mechanism for grounding general language using neural primitives (schemata).", "Executable semantic parsing approaches utilize hard-coded, external symbolic executors, whereas ECL highlights the role of learned neural world emulators, as in current embodied AI research efforts (see §7.2).", "Formalizing the view characterized above suggests a novel computational model of language understanding.", "While current statistical models focus on the linguistic signal, research shows that most of the relevant information required for understanding a linguistic message is not present in the words (Stolk et al., 2016; David et al., 2016).", "Accordingly, the ECL view suggests shifting the focus to the mental models that communicators use, and to the neural mechanisms used to construct them, e.g., mental simulation.", "What follows adapts a relevant cognitively-inspired framework from general AI to the present NLU setting (§5.1), and discusses computational challenges (§5.2).", "Note that similar insights have been applied to multi-agent communication problems (Andreas et al., 2017), but their application to general NLU has been limited.", "The recently introduced Consciousness Prior (CP; Bengio, 2017) is a framework to represent the mental model of a single agent through the notion of abstract state representations.", "Here, an abstract state corresponds with $\tilde{s}$ (§4): a low-dimensional, structured, interpretable state encoding, useful for planning, communication, and predicting upcoming observations (François-Lavet et al., 2019).", "One example is a dynamic knowledge graph embedding used to represent a scene (Kipf et al., 2020).", "We adapt CP to a two-player cooperative linguistic communication setting (Tomasello, 2008).", "We assume a communicator (A) and a recipient (B), as shown in Fig. 3. The computational problem of the communicators is a meeting of minds (Gärdenfors, 2014), or achieving some alignment of their mental models (Rumelhart, 1981; Stolk et al., 2016): the communicator A wishes to induce in B some (possibly ordered) set of goal abstract states G.", "We leave exploration of the communicator side to future work, and focus here on understanding.", "We assume that A sequentially generates utterances $u_t \in U$ (we assume equivalence between utterances u and observations o) using an utterance model (Bengio, 2017).", "Analogously, B uses a comprehension model C such that $\tilde{s}_t = C(\tilde{s}_{t-1}, u_t)$.", "We assume that alignment is possible: there exists some sequence of utterances that will induce G.", "This framework is readily applicable to static text (reading comprehension).", "For example, in Fig. 1, G would be the sequence of desired states, and each sentence corresponds to an utterance ($u_1$ = The world contains 2 crates., ...).", "We can now more precisely characterize the challenges that the recipient faces.", "At the root of the problem is the embodiment principle (Lawrence, 2017): human internal representations and computation capacity, as represented by $\tilde{s}$ and $\tilde{T}$ respectively, are many orders of magnitude larger than their linguistic communication bandwidth.", "We note that though $\tilde{s}_t$ is only a subspace of the full mental state, following Stolk et al. (2016) and Bengio (2017) we assume that it still holds that $\dim(\tilde{s}_t) \gg \dim(u_t)$.",
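A hedged sketch of the recipient side of this framework, using the paper's notation: the comprehension model C updates the mental model one utterance at a time. The parse and emulate callables are stand-ins for the learned components; everything else follows the definitions above.

def comprehend(parse, emulate, state, utterance):
    """One update of the mental model: s_t = C(s_{t-1}, u_t)."""
    action = parse(state, utterance)   # ground the utterance into a parse
    return emulate(state, action)      # simulate its effect on the mental model

def read(parse, emulate, initial_state, utterances):
    """Reading comprehension as sequential mental-model construction."""
    state = initial_state
    for u in utterances:               # e.g., u1 = "The world contains 2 crates."
        state = comprehend(parse, emulate, state, u)
    return state                       # final mental model, queried for QA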
"The embodiment principle dictates extreme economy in language use (Grice et al., 1975), and results in three major challenges: Common ground (prior world knowledge).", "Meaning cannot be spelled out in words but rather must be evoked in the listener (Rumelhart, 1981) by assuming and exploiting common ground (Clark and Schaefer, 1989; Tomasello, 2008), i.e., shared structures of mental representations.", "In other words, to achieve some aligned goal state g, the communicators must rely heavily on pre-existing similarities in $\tilde{s}$, $\tilde{a}$, and $\tilde{T}$.", "Developing computational versions of human world models ($\tilde{T}$) is likely AI-complete or close, but useful middle ground may be attained by partial approximations.", "Common ground (discourse).", "In the context of discourse, new information must be accumulated efficiently to update the mental model (Clark and Schaefer, 1989; Stolk et al., 2016).", "Consider Remove all apples from the second crate (Figure 1).", "Full comprehension is only possible in the context of a sufficiently accurate mental model.", "Using our previous notation, the comprehension of $u_t$ depends both on the previous utterances $u_{1:(t-1)}$ and on the intermediate mental model $\tilde{s}_{t-1}$.", "Abstract vs. Literal Language.", "Interpretation of literal language is relatively straightforward: it is the language first acquired by children, directly related to the physical world.", "However, much of human language is more abstract, relying on metaphors borne of embodiment.", "The symbolic programming analog fails for utterances like these elections seem like a circus.", "Symbolic programming languages cannot handle non-literal interpretations: how are elections like a circus?", "This is related to selective analogical inference (Gentner and Forbus, 2011), closely related to ECL: not everything in the source domain (circus) is mapped to the target (elections).", "Humans easily perceive the salient metaphoric mappings (clown → candidate), but this feat remains extremely complex for machines.", "This section presents a schematic ECL-inspired architecture towards the implementation of the comprehension model C, which addresses the challenges presented in §5.2.", "Fig. 4 shows the proposed architecture.", "For simplicity, the focus is on a static reading comprehension setting, but the architecture supports richer environments as well.", "The environment provides an interaction API to the agent, as well as the reward signal.", "The supported interaction may vary considerably depending on the task; for reading comprehension, it allows structured access to the text while supporting flexible reading strategies (Yuan et al., 2019).", "This flexibility is important for long documents, where navigation may be required (Geva and Berant, 2018).", "For executable semantic parsing, there might be external systems to interact with besides the text, such as a database (Liang et al., 2016).", "The agent architecture approximates the important ECL functions outlined in §4, and consists of four main modules:", "Memory.", "We distinguish between two forms of memory, the first being an episodic, short-term mental model: the system's current abstract state representation ($\tilde{s}_t$).", "The symbolic programming analog is the execution trace of a program, containing the states of relevant working variables at each execution step.", "Fig. 4 displays the updated mental model, after the removal of the apples.",
"Compiled knowledge, or long-term memory, reflects highly familiar object representations, behaviors and schemata, such as common sense, intuitive psychology and physics.", "The symbolic programming language analogs of this are libraries: largely static, hierarchical and compositional repositories of functions and classes.", "[Figure 4: agent architecture; panels Emulator, Natural Language, Environment, Agent; example input Remove all apples from the second crate.]", "In the course of language interpretation, these libraries are importable: for the symbolic example in Fig. 4, the parser might instantiate a new variable of an imported type (e.g., crate2 = Container()).", "Both types of memory are accessible to all components of the agent.", "Parser.", "An abstraction of higher-level perception, control, reasoning and linguistic functions.", "It handles interpretation of new linguistic inputs based on prior knowledge and the current mental state.", "Consonant with the view of analogy-making as a kind of higher-level perception or recognition (Mitchell, 1993), metaphoric inference is involved in grounding a novel input $u_t$ into internal, neural state representations $\tilde{s}_t$, $\tilde{a}_t$ affording simulation.", "See Fig. 4 and Fig. 2 for examples of literal and non-literal language, respectively.", "Emulator.", "Functionally similar to the executor module in executable semantic parsing, but learned, and obviously far greater in scale.", "This module is an abstraction of neural emulation mechanisms ($\tilde{T}$), representing a wide range of functions, from lower-level motor control and imagery to higher-level models used for planning and theory of mind (Grush, 2004).", "It operates over the current mental model and the semantic parse produced by the parser.", "The output is then an updated mental model.", "Importantly, the proposed architecture is designed to address the challenges outlined in §5.2: compiled knowledge underlies human common ground, the building blocks of $\tilde{s}$, $\tilde{a}$ and $\tilde{T}$.", "Memory and emulation are instrumental for accumulation in discourse.", "The ability to understand abstract language involves all modules in the system.", "The architecture outlined in §6 is very ambitious; its implementation requires much further research.", "This section proposes a roadmap to this goal, identifying three sub-goals (Fig. 4), presented in order of increasing difficulty.", "Broadly speaking, the level of difficulty is determined by which components are assumed as given in the input (here this also means they are hard-coded in a symbolic programming language), and which must be learned.", "Observing that literal language is close to the embodied primitives level, its interpretation is simpler than that of non-literal language (see §4).", "Therefore, in this phase, the emulator and compiled knowledge are hard-coded; the focus here is learning the parser.", "In other words, this sub-goal focuses on extending executable semantic parsing from relatively narrow domains to handle more general literal language on-the-fly, similarly to zero-shot semantic parsing (Givoli and Reichart, 2019).", "For the example in §2.1, the parser could be expected to infer the types (boxes as containers, fruits as objects) either by context (Yao et al. (2018) explore a preliminary schema-based approach) or by explicit declarative language, using them to configure the emulator to handle the specific required problem setting (Tamari et al., 2020).",
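As a concrete illustration of this phase, the following hypothetical sketch shows type inference from a small schema lexicon being used to configure an emulator; the lexicon, defaults, and function names are invented for illustration and are not taken from the cited works.

# Illustrative mapping from nouns to the emulator's built-in types.
SCHEMA_LEXICON = {"crate": "Container", "box": "Container",
                  "apple": "Object", "pear": "Object"}

def configure_emulator(nouns):
    """Map the nouns of a new task onto hard-coded emulator types."""
    return {noun: SCHEMA_LEXICON.get(noun, "Object")  # fall back to a default
            for noun in nouns}

print(configure_emulator(["crate", "apple"]))
# {'crate': 'Container', 'apple': 'Object'} -- enough for the parser to
# instantiate, e.g., crate2 = Container() from the emulator's library.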
"As in similar projects exploring embodied understanding (Pustejovsky and Krishnaswamy, 2016; Baldridge et al., 2018), new simulator frameworks must be developed.", "While full embodiment calls for multiple modalities, the degree to which it is required remains an important open question (Lupyan and Lewis, 2019).", "Accordingly, and for immediate applicability to purely textual NLU problems, we propose also focusing on the simpler setting of interactive text (Nelson, 2005).", "Recent research on text-based games shows how agents can learn to program in such languages (Côté et al., 2019; Ammanabrolu and Riedl, 2019), and how real language understanding problems can be framed as executable semantic parsing using configurable text-based simulators (Tamari et al., 2019).", "This phase assumes that the compiled knowledge is given (hard-coded), while the parsing and emulator modules are neural (learned).", "A hard-coded emulator will likely be needed to train a learned emulator.", "The learned event execution of Narayanan (1997) provides a useful starting point towards computational models capable of such inference.", "In general, learned simulation is relatively unexplored in the context of natural language, though recent work has explored it in generated instruction-following setups (Gaddy and Klein, 2019; Adhikari et al., 2020).", "Outside of NLU, learning structured world models is a long-studied, fast-growing field in embodied AI research (Schmidhuber, 1990; Ha and Schmidhuber, 2018; Hamrick, 2019; Kipf et al., 2020), and recently also in learned executors for neural programming (Kant, 2018).", "We expect much useful cross-fertilization with these fields.", "This phase focuses on the component seemingly hardest to learn: compiled knowledge.", "Out of scope here is the fully neural setting where all components are jointly learned, as in continual learning research (Parisi et al., 2019).", "Instead, we focus on a simpler setting, in which the compiled knowledge is learned but represented by symbolic code; i.e., learning the static code library underlying the simulation framework.", "This sub-goal is relevant for training the parser (§7.1) as well as the emulator (§7.2), and can be pursued in parallel to them.", "In this setting, learning compiled knowledge is closely related to automated knowledge base construction (Winn et al., 2019) or frame induction from text (QasemiZadeh et al., 2019).", "Our proposed paradigm suggests enriching classic symbolic knowledge representations (Speer et al., 2017) to an executable form (Tamari et al., 2020).", "Preliminary steps in this direction are seen in inferential knowledge bases such as ATOMIC (Sap et al., 2019), which provides limited execution logic using edges typed with if-then relations.", "Alongside FrameNet and MetaNet, others have collected schema and metaphor mappings by learning them from large corpora (Beigman Klebanov et al., 2016; Gao et al., 2018).", "Pastra et al. (2011) built a database of concepts directly groundable to sensorimotor representations, primarily for robotics applications.",
"This position paper has proposed an approach to representation and learning based on the tenets of ECL.", "The proposed architecture, drawing on contemporary cognitive science, aims to address key limitations of current NLU systems through mental simulation and grounded metaphoric inference.", "We outlined major challenges and suggested a roadmap towards realizing the proposed vision.", "Growing empirical evidence shows that language is intricately intertwined with a vast range of other neural processes.", "Accordingly, this work suggests a symbiotic view of cognitive science, embodied AI, and computational linguistics.", "By sharing common foundational problems, these fields may better share and co-evolve common solutions.", "Finally, we believe that attaining deeper language understanding must be a large-scale effort, beyond the scope of any one research group.", "We hope that the paradigm presented here will help provide coherence to such efforts.", "One of our main goals was to stimulate a discussion; moving forward, we welcome comments, feedback, and suggestions.", "We thank the reviewers for their insightful comments.", "This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant no. 852686, SIAM) and NSF-BSF grant no. 2017741 (Shahaf), as well as the Israel Science Foundation grant no. 929/17 (Abend)." ]
[ "abstain", "objective", "abstain", "abstain", "abstain", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "objective", "abstain", "abstain", "objective", "method", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "method", "method", "objective", "other", "other" ]
[ "We address the task of automatically grading the language proficiency of spontaneous speech based on textual features from automatic speech recognition transcripts.", "Motivated by recent advances in multi-task learning, we develop neural networks trained in a multi-task fashion that learn to predict the proficiency level of non-native English speakers by taking advantage of inductive transfer between the main task (grading) and auxiliary prediction tasks: morpho-syntactic labeling, language modeling, and native language iden-tification (L1).", "We encode the transcriptions with both bi-directional recurrent neural networks and with bi-directional representations from transformers, compare against a feature-rich baseline, and analyse performance at different proficiency levels and with transcriptions of varying error rates.", "Our best performance comes from a transformer encoder with L1 prediction as an auxiliary task.", "We discuss areas for improvement and potential applications for text-only speech scoring.", "The growing demand for the ability to communicate in English means that both academic and commercial efforts are increasing to provide automated tutoring and assessment systems.", "These educational systems address the increasing need for online resources to help students learn and to map users to the validated proficiency scales which play a critical role in securing education and work opportunities (British Council, 2013).", "Language learning applications delivered through smart speakers such as Amazon Alexa and Google Home are a novel form of educational technology.", "These offer obvious benefits to users in terms of immediacy, interaction and Currently at Google U.K. convenience.", "However, it remains challenging for application providers to assess language content collected through these means.", "Audio recordings are not returned to the developers for privacy reasons: instead only text responses are returned, the output of automated speech recognition (ASR) systems.", "This sets a new task in educational applications: the automated proficiency assessment of speech based on transcriptions alone.", "In this paper we report on our efforts to grade learner English transcriptions obtained from ASR systems, comparing a feature-rich baseline with neural networks trained on multi-task objectives.", "To assess spontaneous speech, automated grading systems tend to use a combination of features extracted from the audio recording and the transcription resulting from ASR.", "For instance, SpeechRater TM by the Educational Testing Service uses text-based features based on frequency counts and lexical unigrams among others, the number of word tokens per second, the length of interpausal units in words, the vocabulary size normalized by recording duration and score predictions are made using linear regression (Zechner et al., 2007, 2009; Higgins et al., 2011).", "However, without the audio recordings, proficiency scoring must be performed based on the text alone.", "Thus robust methods for text-only speech scoring need to be developed to ensure the reliability and validity of educational applications in scenarios such as smart speakers.", "Relatively few automated speech graders use neural approaches that incorporate text-based features from transcripts.", "Chen et al. (2018) used a linear regression model on the concatenated high-level representation outputs of two separate RNNs for sequential audio and text inputs; Qian et al. 
"In this work, we address the task of automatically grading the language proficiency of spontaneous speech based on ASR transcriptions only, and seek to investigate the extent to which current state-of-the-art neural approaches to language assessment are effective for the task at hand.", "Specifically, we make the following contributions:", "1. We develop a multi-task framework that leverages inductive transfer between our main task (grading spoken language proficiency) and auxiliary objectives: predicting morpho-syntactic labels, the learner's first ('native') language (L1), and language modeling (LM).", "2. We investigate the performance of two encoder types for the speech scoring task: bidirectional recurrent neural networks, and bidirectional representations from transformers.", "3. We analyze model performance under different conditions: namely, with and without filled pauses included in the transcriptions, with varying rates of word error in the ASR transcriptions, and according to the proficiency of the student response.", "4. We make our code publicly available for others to use for benchmarking and replication experiments.", "In contrast to feature-based scoring, we instead train neural networks on ASR transcriptions which are labeled with proficiency scores assigned by human examiners, and guide the networks with objectives that prioritize language understanding.", "To the best of our knowledge, there has been no previous work using text-based auxiliary training objectives in automated speech grading systems.", "Automated grading of student responses to exam questions until recently tended to adopt feature-based approaches to score prediction, for instance using distinctive word or part-of-speech n-grams (Page and Paulus, 1968; Attali and Burstein, 2004; Bhat and Yoon, 2015; Sakaguchi et al., 2015), as well as grammatical errors and phrase-structure rules (Yannakoudakis et al., 2011; Andersen et al., 2013).", "[Footnote 1: code at ...automated-english-transcription-grader; the corpus we work with is not publicly available as it is private exams data, but the code repository allows you to work with any set of English texts and proficiency scores.]", "More recently, word and character embeddings have served as input to deep neural network models, with a final regression layer predicting the score (Alikaniotis et al., 2016; Taghipour and Ng, 2016; Dong et al., 2017; Jin et al., 2018).", "The advantage of the latter approach is the relative ease of data pre-processing, since text representations are learned through distributional methods rather than hand-crafted features.", "The field of NLP has seen advances recently thanks to a shift from fixed word embeddings to contextualized representations such as ELMo (Peters et al., 2018) and those which can be obtained from large transformer models such as BERT (Devlin et al., 2019).", "Similarly in text scoring, some have incorporated contextualized word embeddings to improve performance (Nadeem et al., 2019).", "We now apply such approaches to the grading of spoken transcriptions in a scenario where the audio, or information derived from it, is not available.", "In other words the task is analogous to essay scoring, except for the presence of characteristic speech features such as false starts, repetitions and filled pauses (Moore et al., 2015; Carter and McCarthy, 2017).", "This poses a particular challenge, as most models used in data pre-processing and representation learning have been trained on written, not spoken, texts (Caines et al., 2017).",
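As an illustration of the contextualized-representation approach described above, the following sketch obtains a sequence-level encoding of a transcription with a pre-trained BERT model via the HuggingFace transformers library; pooling the first ([CLS]) hidden state mirrors common practice, and the example transcription is invented.

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

transcription = "well umm i think the company should err invest more in training"
inputs = tokenizer(transcription, return_tensors="pt",
                   truncation=True, max_length=128)
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # 768-dim [CLS] representation
print(cls_embedding.shape)                        # torch.Size([1, 768])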
"Furthermore, most existing approaches to speech grading do have access to audio features, and indeed extract a large number of prosodic or duration-based features (Zechner et al., 2009; Higgins et al., 2011; Loukina et al., 2017; Wang et al., 2018a).", "Prosodic and phonological features extracted from the audio and the ASR model are undoubtedly useful for human assessment of speech proficiency and for providing feedback.", "On the other hand, previous work suggests that models trained solely on ASR text-based features are competitive with those using only acoustic features or a combination of the two (Loukina and Cahill, 2016).", "Their interpretation of these results was that the transcription offers some proxy information for prosodic and phonological performance: for instance, the presence of hesitation and silence markers, the number of word tokens in the transcription, and the transcription errors which might arise from mispronunciations.", "We instead allow our models to learn from auxiliary (morpho-syntactic and other) tasks: multi-task learning has been shown to help in automated essay scoring (Cummins and Rei, 2018) and grammatical error detection of learner English essays (Rei and Yannakoudakis, 2017), whilst information about a learner's native language has been shown to help in error detection for English and the grading of Norwegian essays (Rozovskaya and Roth, 2011; Johan Berggren et al., 2019).", "[Table 1: Training, validation and test split statistics. Candidates: train 691, valid 297, test 225, total 1,213. Transcriptions: train 4,589, valid 1,982, test 1,488, total 8,059. Total words: train 205,311, valid 91,224, test 67,832, total 343,367. Mean response length (words): train 44.7, valid 46.0, test 45.6, overall 42.6.]",
achievement.", "In the transcription-only scenario, we cannot assess the first component, have only a proxy for the second in terms of filled pause occurrence (umm', err', etc ), but still have access to the other three components through the ASR transcriptions.", "Our data comes from 1213 exam candidates with six first languages in approximately uniform distribution: Arabic, Dutch, French, Polish, Thai and Vietnamese.", "The distribution of candidates over proficiency levels is approximately normal, with a peak over the intermediate scores (Figure 1).", "The train/validation/test split across candidates is roughly 55 : 25 : 20 as detailed by Table", "1. Each candidate's recordings are transcribed by a teacherstudent ASR system with a lattice-free maximum-mutual-information acoustic model (Kanda et al., 2017).", "The teacherstudent training procedure uses KullbackLeibler divergence between the word sequence posteriors from the student model and a teacher ensemble as the loss function (Wong and Gales, 2016).", "The result is a computationally efficient ASR system, as the student is able to decode in a single run to a similar level of performance as an ensemble decoder requiring multiple runs (Hinton et al., 2014).", "There is more information about the ASR system in Wang et al. (2018b).", "We also evaluate performance on manual transcriptions of the test set, in order to assess the impact of ASR errors on our models.", "A native speaker of English was asked to transcribe the recordings as faithfully as possible to include hesitations, disfluencies and partial words.", "A subset of 230 recordings were transcribed by a second native speaker: inter-annotator agreement on this subset is high (Cohen's = . 898 ).", "Compared against the annota-tor's manual transcriptions, the word error rate of the ASR is 19 .", "5% overall, but with variance from 32% for speakers with a score of 1 , to 15% for speakers with scores 5 and 6 .", "To be able to predict morpho-syntactic labels, 2261 Figure 1: Distribution of proficiency scores in the training and test sets.", "we parse the data using UDPipe (Wijffels, 2018), trained on the Universal Dependencies (UD) English Web Treebank 2.4 made up of 255 k words and 16 .", "6 k sentences from weblogs, newsgroups, emails, reviews, and Yahoo! answers (Silveira et al., 2014).", "We use UDPipe to automatically generate Penn Treebank part of speech (POS) tags (Taylor et al., 2003) and UDs (Nivre et al., 2016) for our training data.", "Filled pauses were excluded before parsing, so that they would not affect the parse of other words in the transcription, but were then re-inserted with null parse values, in case they serve as a useful signal to the language proficiency models.", "Transcriptions were parsed as whole units: we did not attempt to delimit speech-units.", "For the most part this results in fairly lengthy, but not impractically long, word sequences.", "The ASR transcriptions are on average 44 word tokens long ( = 33 . 0 ), with a minimum of 2 tokens, a maximum of 179 , and 50% of the texts being between 23 and 54 tokens long.", "As seen in Figure 2, the distribution of transcription length differs according to proficiency level: the failing grades tend to be very short responses, the beginner level responses are a little longer, and the bulk of intermediate responses are between 25 and 50 tokens long (recordings are between 20 and 60 seconds duration).", "The speech grader 3 takes a sequence of token embeddings [ x 1 , . . . 
, x n ] as input and predicts a proficiency level score.", "Tokens are first converted to vector representations x t , and then passed through an encoder.", "We trial two different encoders: a bidirectional LSTM (Hochreiter and Schmidhuber, 1997) and BERT (Devlin et al., 2019).", "The encoding is passed through the prediction head, a series of linear layers and activation functions, where the final activation function is bound to the scoring scale ( 0 6 ).", "The model uses mean squared error (MSE) as the loss function E score for the main task.", "LSTM encoder The bi-directional LSTM encoder uses the word-level tokenization provided by UDPipe.", "For each token, the hidden states of the two LSTMs are concatenated, creating a context-aware hidden state h t = [ h t ; h t ] .", "The hidden layers that are formed at the final timesteps of the bidirectional LSTM ( h 1 , h n ) are concatenated for the scoring prediction head.", "BERT encoder The BERT encoder uses a pre-trained model checkpoint and tokenizer, specifically bert-base-uncased , provided by the Hug-gingFace Transformer library (Wolf et al., 2019).", "BERT's tokenizer uses the WordPiece model (Zhang, 2016), resulting in a much larger vocabulary than the LSTM encoder.", "BERT embeddings are extracted from a transformer trained with a masked LM objective: a percentage of input tokens are masked and then the network learns to predict the masked tokens.", "BERT is also trained with a second objective: given two input sequences, it predicts whether one sequence directly follows another.", "A sequence level embedding is produced by pooling the hidden states of the special first token, [CLS], resulting in a 768 dimensional embedding.", "Auxiliary objectives We further extend the model to incorporate auxiliary objectives, and experiment with four different tasks: language modelling (LM), native language prediction (L1), POS-tagging, and UD prediction where we predict the UD type of a dependent with its head (see Section 3).", "These auxiliary objectives are based on previous work indicating that learning to make such predictions aids in tasks such as essay scoring and grammatical error detection (Cheng et al., 2015; Rei and Yannakoudakis, 2017; Cummins and Rei, 2018; Johan Berggren et al., 2019; Bell et al., 2019).", "Specifically, for the last three tasks, we predict a label y per word x t (Figure 3; left).", "Each task s is assigned an individual prediction head, identical to the scoring head described above, followed by a softmax layer that produces a probability distribution over the set of output labels to replace the bounded scoring activation function.", "When using BERT, our model only predicts labels for auxiliary objectives on the first token of a word, in an identical fashion to Devlin et al. (2019)'s evaluation of BERT on named entity recognition.", "The LM objective is implemented differently for each model.", "The LSTM (Figure 3; right), has two additional hidden layers (Rei, 2017): m t = tanh W l h t and m t = tanh W l h t , where W l and LM L1 POS UD LSTM 0.1 0.01 0.005 0.001 BERT 0.05 0.5 0.1 0.01 Table 2: Weighting values for auxiliary objectives scores for the LSTM and BERT encoders.", "W l are direction-specific weight matrices.", "The surrounding tokens w t 1 and w t +1 are then predicted based on each hidden state using a softmax output layer.", "In contrast, the BERT model implements the same masked language modeling objective as utilized during pre-training.", "We implement this identically to Devlin et al. 
(2019): 15% of tokens in the sequence are randomly selected to be masked, and of those, 80% are masked, 10% are replaced with another token and 10% are unchanged.", "The loss is only computed over the selected tokens.", "Note that filled pauses are not utilized for auxiliary objectives.", "The overall loss function E is adapted using a similar approach to Cummins and Rei (2018): a weighted sum of the scoring loss (main task) E score and the auxiliary task losses E aux , where T is the total number of auxiliary tasks.", "All of the auxiliary tasks use cross-entropy loss where y x,l is the predicted probability of token x having label l , and y x,l has the value 1 when l is the correct label for token x and 0 otherwise.", "Model hyper-parameters are tuned based on MSE on the validation set.", "The model is optimized using Adam (Kingma and Ba, 2014), with a learning rate of 0 .", "001 that linearly decreases during training, for 3 5 epochs (when trained with no, 2263 RMSE PCC 0 .", "a single, or multiple auxiliary objectives respec-tively).", "Responses are processed in batches of 8 and are padded/truncated to a length of 128 .", "LSTM token embeddings of size 300 are randomly initialized and fine-tuned during training.", "4 The LSTM has 3 hidden layers with hidden state sizes of 256 for each direction.", "Weightings for each of the auxiliary objectives were selected by evaluation on the validation set and are outlined in Table", "2. Baseline model Our baseline approach is a feature-based model of the type which has been used in previous research (Vajjala and Rama, 2018; Yannakoudakis et al., 2018).", "Specifically, we train a linear regression model and use as features tf idf weighted word and POS n -grams (up to tri-grams), grammatical constructions extracted from the phrase-structure trees, the length of the transcript, and the number of errors, estimated by counting the number of trigrams that are absent from a large background corpus of correct English (Fer-raresi et al., 2008).", "Evaluation Our primary metric is root-mean-square error (RMSE), which results in real valued average distances from the gold standard examiner scores on our 06 scale.", "For each model we also report Pearson's correlation coefficient with the true scores and the percent of predictions which are within a half or one score from the reference score ( 0 . 5 and 1 . 
0 ).", "These can be thought of as tolerable error thresholds where being out-by-two can have severe consequences for the student (for example, affecting employment or education prospects).", "Bear in 4 Initial experiments showed that fixed pre-trained word embeddings such as GloVe (Pennington et al., 2014) do not improve performance further.", "mind that human examiners are thought to correlate on proficiency scoring at about 0 .", "8 , and that most exams are graded by a single examiner, and the idea of tolerable error becomes relevant to human as well as machine scoring.", "It would be a useful exercise to collect within 0 .", "5 and within 1 .", "0 scores from human examiners.", "We ran a series of experiments to analyze the impact that data pre-processing and encoder design have on the performance of our automated speech grader.", "All results presented are computed over 10 repetitions, include filled pause information and use an ASR system with a WER of 19 .", "5 % (see Section 3) unless otherwise stated.", "Table 3 compares the results for the two different encoders: LSTM and BERT.", "Using BERT significantly increases the performance of the speech grader, RMSE reduces by approximately 0 .", "1 and the number of responses graded within 0 .", "5 or 1 point of examiner provided score increases by approximately 5 .", "5 %.", "Our results, in Table 3, indicate that certain auxiliary objectives can improve the performance of our automated speech grader.", "The LSTM gains significantly when applying multi-task learning from POS, UD or LM prediction tasks.", "It is also possible that these objectives help to account for errors in ASR by identifying instances where the expected word or morpho-syntactic label differs from the provided input.", "We also trained models for all possible combinations of auxiliary objectives.", "While several of these were significantly better than the scoring only model, only one, LSTM with POS+UD+L1 (combo'), produced better results than the best performing single task model.", "These results were not significantly better than the single-task POS prediction model, though we did not explore tuning the alpha weighting values for the combination models.", "In contrast, BERT only receives a significant improvement in grading ability when using the L1 prediction task.", "Since BERT already has linguistic knowledge from external pre-training, it is likely that the L1 prediction helps to identify mistakes that are typical of particular L1 learners and the level of proficiency these errors equate to.", "No combinations of auxiliary objectives led to any improvement for the BERT encoder.", "To investigate the impact that ASR system quality has on an automated speech grader, we train models using output from ASR systems with varying word error rates.", "We then evaluate these models on output from each ASR system to analyze the grader's dependence on the word error idiosyncrasies of the system used during training.", "We also evaluate on manual transcriptions provided by annotators.", "The ASR systems have WER's of 25 .", "5 %, 21 .", "7% and 19 .", "5% on the test set.", "Figure 4 shows, as expected, that training a speech grader with data from an ASR system with lower word error rates produces better results.", "However, it is interesting to note that this holds true even when evaluating with data from inferior ASR systems.", "These results suggest that the speech grader is relatively invariant to the quality of the ASR it is being evaluated on within the range of word 
error rates we have tested.", "Difference in ASR quality has a bigger influence on the RMSE when using an LSTM encoder compared to a BERT encoder.", "BERT's tolerance for errors in input makes sense when considering that one of its training objectives attempts to recover the ground truth after the input is perturbed.", "Interestingly, both models perform poorly on manually transcribed data.", "A contribution to this is the quality of the manual transcriptions themselves, which will have an error rate far below those of the ASR systems.", "Moreover, three fundamental differences in transcription format are that the human transcriber has access to an unclear' token for occasions where the audio quality is poor or the candidate's voice is obscured: the ASR on the other hand will attempt to transcribe such portions of the audio with real words from the vocabulary.", "Secondly, there are many more filled pauses in the human transcriptions than in the ASR: in total 9% of word tokens are filled pauses in the manual transcription, versus 5.1% for the best ASR.", "Thirdly, the manual transcriptions are about 7% longer than the machine transcriptions, a consequence of the human transcribers more accurately picking up details in the audio recording, and transcribing more words than the ASR systems.", "All these differences mean that the manual transcriptions are quite different from the ASR transcriptions the speech graders are trained on, therefore the models perform less well.", "Though this task aims to utilize only textual features to perform automated speech grading, limited", "fluency information is available via the filled pause tokens output by the ASR system.", "These tokens are inserted into a transcription when the ASR has recognized one of a finite set of forms such as, err', umm', etc .", "We examine the dependence of our automated speech graders on filled pauses to accurately predict proficiency in two ways.", "Firstly, we train and evaluate models without filled pause information.", "Secondly, we evaluate models trained with filled pause information on the test set with filled pause information removed.", "Removing filled pause tokens when training and evaluating produced better results for both speech grader models, but not significantly so (Table 4).", "However, when evaluating a model trained with filled pause information on ASR output excluding filled pauses, the BERT model significantly worsens (RMSE 0 . 926 versus 0 . 921 ).", "This suggests that filled pauses only add noise to the training process, and that they should be excluded before auto-marking takes place.", "We further inspected the occurrence of filled pauses in the training and test sets, and found no strong correlation between the filled pause frequencies in the transcriptions and the gold scores awarded by the examiner ( = 0 . 
0268 ).", "This either indicates that the candidates hesitate as much as each other no matter their proficiency level, perhaps due to the pressure of an exam setting or the task of spoken monologues in a second language, or it indicates that filled pauses are a ubiquitous feature of spoken language used for planning and discourse management purposes (Maclay and Osgood, 1959; Clark and Fox Tree, 2002; Tottie, 2019).", "In any case, by removing them from the transcriptions, both the LSTM and BERT models are better able to assign a proficiency level to the text.", "To assess the performance of the baseline against our best LSTM combo and BERT+L1 models at different proficiency levels, we treated our seven integer scores (from 0 to 6 ) as classes, rounding .", "5 scores up, and evaluated RMSE, within 0 .", "5 and within 1 .", "0 on a per-level basis (Table 5).", "Recall that 0 maps to a failing grade, scores of 1 and 2 are classed as beginner, 3 and 4 as intermediate proficiency, and 5 6 as an advanced learner of English.", "We see that the baseline performs relatively well largely because of strong performance in the range 2 to 4 where its RMSE is almost as low as those for BERT+L1, and its within 0 .", "5 and 1 .", "0 percentages are higher.", "This is because the baseline largely predicts scores in that range, 2 to 4 (90% of its predictions), whereas we see a greater spread of 2266 scores predicted by the LSTM and BERT models and consequent improvements at the edges of the scoring range.", "RMSE generally decreases as we move from the baseline to LSTM combo to BERT+L1.", "BERT+L1 is much better than LSTM combo at predicting scores of 0 , performs about the same for scores of 1 and 2 , and then improves again towards the upper end of the scoring scale.", "Even with BERT+L1 there is variance in performance by proficiency level.", "The most difficult to grade accurately are those responses at the top and bottom of the scoring scale.", "This seems more a reflection of the distribution of training data we obtained, rather than an inherent linguistic difficulty in identifying low or high performance English: the bulk of training instances are between 3 and 5 (Figure 1), and it is possible that the models drift towards the central grades as an example of more conservative learning.", "This merits further investigation in future, either by data down-sampling to balance the training distribution, or artificial error generation to up-sample the edge cases.", "We presented an effective approach to grading spontaneous speech based on ASR transcriptions only, without direct access to the audio recording or features derived from it.", "Our best performing model involves a BERT encoder with first language prediction as an auxiliary task.", "We showed that this model improves on alternative LSTM-based models, and over a feature-rich baseline, by better predicting scores at the edges of the proficiency scale, while also offering (smaller) gains at the central points on the scale.", "Its error is on average less than 1 , and 76% of its predictions are within 1 grade of the examiners' gold scores.", "We recognise that without the audio signal, some information is lost that would be useful for speech assessment namely prosodic and phonemic features but that assessment on transcriptions alone has a use case in educational technology for home assistants.", "Furthermore such applications may become increasingly relevant as organisations reduce the types of data they collect from the end user due to privacy 
concerns.", "Further work should be undertaken in terms of scoring validity and the robustness of such an approach, before such models are applied to any high stakes' (i.e. exam) scenario, as opposed to the kind of at-home practice apps we have discussed in this paper.", "We also showed that the models improve as they are trained on increasingly accurate ASR transcriptions, though performance deteriorates when they are evaluated on manual transcriptions.", "We surmise that this is because of stylistic differences in the machine and human transcriptions, and that adaptation of the models to manual transcriptions will help mitigate the drop in performance.", "Additional experiments indicated that the removal of filled pauses from the transcriptions was beneficial to the scoring models, and that scoring performance is best for the middle grades of the scoring range.", "Further research is needed to improve machine assessment at the upper and lower ends of the scoring scale, although these are the scores for which the least training data exists.", "Therefore future work could include different sampling methods, generation of synthetic data, or training objectives which reward models which are less conservatively drawn to the middle of the scoring scale.", "Finally, we acknowledge that speaking proficiency in a second language is a multi-faceted construct made up of more than the features which can be drawn from transcriptions (Galaczi et al., 2011; Lim, 2018).", "For instance, the speaker's prosody, pronunciations and disfluencies are also contributing factors.", "However, given the text-only constraints faced by third-party application developers for home assistants, the proficiency assessment models we present in this work allow for progress in providing low-stakes assessment and continuous practice for language learners, with the caveat that fuller speaking skills should be taught and assessed with the complete construct in mind.", "This paper reports on research supported by Cambridge Assessment, University of Cambridge.", "We thank Kate Knill of the Engineering Department, University of Cambridge for access to the BULATS datasets, as well as Manny Rayner and Nikolaos Tsourakis at the University of Geneva for helpful discussion.", "We also thank the NVIDIA Corporation for the donation of the Titan X Pascal GPU used in this research.", "The first author was funded by the Searle Fund, the Benson & Carslaw Fund, and Emmanuel College, Cambridge." ]
[ "abstain", "objective", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "objective", "objective", "objective", "objective", "method", "method", "objective", "objective", "other", "abstain", "other", "other", "other", "other", "other", "method", "other", "other", "other", "other", "other", "other", "abstain", "other", "other", "method", "abstain", "abstain", "abstain", "abstain", "other", "other", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "abstain", "result", "abstain", "abstain", "abstain", "abstain", "result", "abstain", "method", "objective", "result", "abstain", "abstain", "abstain", "method", "result", "method", "abstain", "abstain", "abstain", "abstain", "abstain", "method", "other", "other", "other", "other" ]